[Binary artifact: POSIX tar archive (owner core:core) containing the directories var/home/core/zuul-output/ and var/home/core/zuul-output/logs/, plus the gzip-compressed member var/home/core/zuul-output/logs/kubelet.log.gz. The compressed kubelet log contents are binary and cannot be reproduced as text.]
nl8חubܛم|…)F_ b,_ɳ ҼfNllJމP+N8y'$]/{m=2G2GR/K5d w\j X˜cJ)W#ɪz*ʟVY|{ Fb;YYzz\bz꺧~W`vYѼ o>.lh*>d]\c:Y+V%(-lUQY#E#4^^f [-<\.62Zԅ[򫢲 Ps*DxRu Tg3ՠ6{ +Ȋ:9,ݤЏZLph`hQE=@g /PMnZ;W/m0jN;۶Vk3ɿF~ Ńo`^vϓE%ԒoTǣeqbYh4.[choa'QʃU~vZVõijmgבIyO^߼sݞ0O(,&o*Mj)>}}~2Z_H]o&n ~gAz[,V[#\lbuzz'˱r~B8-!W[-j S+m'O1[ O4"ޭp{q'ͰZ$8ϴi1ʮOqbP}=DvAAZ`;D{LMxz%,Ӟl 7-.OxX\3|8{Ԝ?M/F@=tD'>A1NxzufÚߓɚF=/Άtv 6Wmf 5e;]l͂Mf\97Z9K8ێZT`^frVRJ -w(˅4#R_Qy4+.Jq` ~ztݟjNCT+}<'0q7ᑣ=O 0)&QO_d# 8~1NTBK,ր m5)yqNŐB iq kEs厒e!@ p@/}:C¨V"_ű) xDQ*gQhƓȬԩLC9R,ZUTd}blg9FΖr6_$f~71[p`Ԗ{Yj{Ɔ,˨HhkM% r֕+zV  1 3@O^ӯ*9ZZ 璸BU1(/UϪkH1CnP+ߞ~' qo Zv Vgs2|umΗٸЪ- bɳ8jkO촵':ڻ[Ex3ު8D6FU`o k;!%nKV@2q|.R4R_DDhrC%}s,׍Ev>YV}N .ʄg&CN5O֥-%qfl.x;06˫"8dWcM39X\2e}vd$Z*!aN6yd` ݓM~k_WꐐՁeThdM(crenڿ&f̌rg'ߩ]\èKoW]cV`֦@ҐX`m A) UYܜI%XtuKz@ϲҌ>tZ;\bX҄&(Jlr: #ʤwXS`G;.{]q-O3:RVN.kQi)roVk''Z^gL}ŷO.&MrtzɉՕ;eLI?, SM~]w6!]~L_3uJ{zrD@m7C<_p] .B&?D;@TiH1AU3ҢŌvoooo5-?F" x*Ek&Wa d IeZEUZ6QmkP=Fe4!b& MhI;wȾ9C;2HO>wnC WM ϯ Ѩ} ~8ީTN4(8 a5O)T=Se}psʫb>uų" Yvo3Ƭ*Zelh~Em +HU-̈,3b :\Fwt fJƱ8m7UU0NeЈԑY#R A؄!E#a_8 Zҵ*л95.]AfI9Iq-" Tvɕ:]Sf4XG>\Br>tQ~Ӑ`r{* B+fűϬ ցIhBK*TW c%%VWtF:.*w1yc1"` 88r̐B-1:S.Vc}EpUd+YƎHƮ9cE~]{i)"Op+-{9{ }|`p <8wz[;Dm@qPnHqH`7E"-BD,U0DM.dk U9ꐵ3UE\Ss}g8N1ZDj}I+uzz<,4/GYU> l@y9n\{9˷˽gF'M ^ R<{!#&iH~hC&y#֒ @qԻ|Hw?+;>׌fm͠:?~c뾱k!-{EN#ع7~[iӨE{w48})Lk,ʐ  RQ 4KE :!uoeHʐ\>3k=(_KZcUg49im jGVWᴵ<乴_iŝ63Pa,釉 1:J 5 X8b&6"^0pNH"ױM5!8~MMlBpfh˜B.͚{>0@sqQqͧ| 7֭j@ermJRXXFmy]*b(RAf?B$"% E8ZMi2VSc ޛKY^A6E0o.Lʰ>WF*aEtm])azCemfa:0]ܴF-ӑO֋wUr[Gp FRQ[I0dEww2-z=w%vlG{4R#ۿ"띞Wyؙ.yًł_q$8-J#'Gdn94bĶZSd+~OoߴM93C E !3--t^w5#7>ٚ1YYFkV\ C$Db''}$TR-fG=Un3s1 p?QbLH\U*2P ^匍`sfE}\\΁lާU(<,?3$-upoowC5v$򜵜q=9z>Գ[;Cibޖ%ݧg}r聏^^=zX|;='?fe}7'#>rX{u{:|]v7׳َݽ{5)6)j='Ρ[mNewLT޻zugd^mJ|tLSi?!XOZA?1o߶% 9 004IY?*ccRueK{ jބ9zU @0&K!,aL-b*zUh d\\ݜ|L_);3+8HFSRiB:ԙ /Ke6Aa=oSe) F J+8Scs¥M[?P7.R_,m쯙U<|#Xn2kg%Ld"CW󸬔gEYVax*Rh)Ft`+\&%$W2mpDL*iTRc ̜qfX3B cIAʌ_}ۤՋ8,>.Һo!\]>\].n^8b{_HC ԉ)`$8R9&-Jy2{ڂS|j2c¦26-qn'\TH$Z/u2g?b4ocAfq,jccƎڝd (%樍51E!ǐ!)IbvJ4C):-3df9NXƚ\D2PME⤺7feR_E$/s8Dl"n#bGĝ1s^,Fk+(DDuPVpd V5ED"E*[&g8c#VgҀV:1̜MGwcl%mbŝ&:K$ 8 M.ɒ B^u\lɜ[yDZx;Oa w2IW?>S#zWvW2X]>чi,241 ]mQi r^؁or4xlIQ`D !OS)թtNosUBGd-#:4̯g}ٺ:Ou̳1'#QeUEft1@LĽB0$e0IHVH|ttVe󷗯xu]+}o\R;6n")rF[Hd%:1) >RcI-| W ltA{z,Lo4Vw='T󽇔F{LU8\dZ%eRwP::#U5F5ZkFWʍՏWosVq5מ \Z˱[Ƀl]G}&+_ ]z~7n fNG ?ǿv*"Lsu?Gcb ?Ued6-ڷ켄ïnHG~:87LHUPطCiE_vV{Y־e.aw욧ebThTBQ={0FeAѧ:D+P%kTSpIܝGA9<ò3.F"̱[0yK} NVY A-aP % ڊ z+"zEH+",!1 $5#xo5†cNQhuݶjVi-kUHʇi-Y !$$ιDo Eи"9 [Z "8DgZbO8\|_ zOn_N1wQL}:){a&Tk0 >2'۸嵮[<Vϓ̱ߣ)!^+Z1"# wݜvKB]*<-X 9 0apQADPlg?([#aGҞmCHU!8zU @0&K!,aL-b^ܺx d~d|cLdL< $&QQT]@Q8SV;-PH:DElL_L#|*⟵?1bTI#sZR@mf@-jX_YD? X9RXTvy.ϼyL&_2!W31j0B+J9jcd }Lc1d'#qH@>ahMCNK D΢*& plr(CdSg8.6x̜xLI+0DZ#qg&i Qh-Q"0ekc!Y`UM;)ȹqѨ'cʖIbUťę4UνAL[53g?"~xSQG]1u6cq6EqHzJ|IYHɁPAdIM!cOɜ[yDZx;Oa {U2ٲW}>SgtͫgӍLedчi,241 ]mQi r^؁or4xlIQ`D !O-A[7-_IZ`.oL%^fk-i9ziΏHʤYfޣ/͎Q(5PZh˷˙{{u Fۊ/vK pس8k~ ϙI;{Hya~ϔtZܧ~o.;IJd9։?&NMX:$Ld6-_%w(_)p;tf+{Hgr1ϋUuZzRcp:"RB퇋\|DžՅ֥lѿg{WJ%$_˨ڕ5H'9N*o򅶊|ַ~IE֓RqCR@9}̓~ɶxGDSTeE" ~ u=-qde2Z7+zJ={e*eJ[Cr.0G8VB|syXy~쬯dr Er@4qY!')r淣GD52G9ӽbj̎l8e=m9dhucv b4HU[)PYQdKF omiQMhIѸT z)-A`ߩul3s:&E`D^ǗK*[CmHc?BU{D{OTR$Hk]<L1@:Ea$J {6/&{#>V3!~Vx#MʱVNhkiCPEEUa=W9|{}N(LJt9KG[ujQ?tj"j|w\G@=HLtّ2W"%~Q阊7'U 㞌e+96:qt8 X^H}058zY}S$dkO\(9(y [gzkՙ^s&4ɋ&yyyϥqHѪu@BH":ltQN^ͯGj[H 3*gRbrA/¿KC\rMdV3{qX%Q+OmX u#%##GvGgrWY>uT9ߦ5"6|\14(h"GeE;ۀ4^;1h\E+< o+[gr*_61Hmp)ߴ2vYbq^o;:3)jm7ۤll;:[:[ʽj4264Z'q_)1!*8ʝ7f1qV{t"iZ+IwՄ&S;͎X\ןLl=_hy䞄/硟M]2C>βa .-\~c9q+AhDP%Q(mZӦÓW 0-΢mxٻu4 :5L֋#)Z b<ݜΙ켯F-XF+<9O؀( Ht7z$7겮 zcKg`6 <[E Hl$8a>E^:%hO)#nܓhU:-([EĖPƽuH ÌМmxoJ/?rXgg!xjP\r)r=׽'TRX0e8 b:?9ū^|JhhO+XQQY''5yu`"L\N NHQ؍ܙDoߍZKmލJ~[ IUK ZT*!1R\u 9rx2;:-w K.] 
S(ALBm؄('y:-dLKK*,yi9Ls>/])E 6*y *72!rC1͵p ]CXH- WXyGZp4S5a+" c+ Z{;IiGm"k/CͲp\jr^٨PJlDj8PX3aqFSV;D1uaDVYE 62(#drDTIw@!j;_5rKR h`|uUG .)380L0,O>c2(G:)bs7J1 ΎܓI܎$gGb`=o7+V$3󡨾5haXO"6O(uv^bS:$Mݵ|.ꔫ1pR[mADb{)$*`sg$ [͍Hi:>`ua@;x~w[R 0&O[<5D B$hH2Hd9NxziiH?ӿwԅ~(y5S;t7}J6"% %y k:ӎ&S=1hU#R]Ȼqb$_l1id 9Jj -Wen|3K_Pl_/&=Th1=P+( ә{f>MkXd$׿H'g4LPj={OStmf[q&j\t ힹY1PmW]ܛwu̘6g[?l2'dƳIr^1Hs1w GuRl 7eԙ,& miŭ>>!ZB 7xHn *)&Ρ4;RGBJaQ ^+J/qgAolu|?ݭBuΤ^0R]R7HyӔ*IZϛ^p"U' u[MθFL3$ ZFܜIAA [Q^`RLH%j`+(5(K)µhh8z;) V_seQjs~6i"bR9c6([8Sh4vL$i*EM?a/O6 uB! ٗ&.m_@Pg0Ʌg MK_K!2-Ѿ,oGwkzpnyo<>Hrԙ5eUPk6:Vnz$H%*K <0< פ3j:RjsfSn“M0&;taUաeEYٮ] 덞l;HHK1nO!}/7~Qi+[7_&q?L_^pC)]&]̋_;`Ӫ-G}܄+pΤ5bc$7ɚ;jqu*K^PD&CN2~wjGPř=bnQJEي,t4W͝e )H`_? Yu/CAel`gw^hG.R6em);(ww}ࢣK>*%VI ZɣYRE?*u|@JPj7Sog@ W?S~̨\S|xo9'#t9~yNQ;&0#o mm [%k+E :|H8la'IB?܀jM1~-^rD) ?5(O+emgokB*ք>t }Xƨ`1Z#anDquG zVvicB)7j<[GEdQ0A[R.k%!RHDcۅZ#g}ɢze_iWuVKo>RxM3, T$2$ Ϗ`e X,}z̦OM^?SqntwS5٫Q؍\p~߿~7j/~*ynmd0FeQ(Sn XYiYrx9:-1w 6l]6"-Ъ=$Qء!J-=C(ǘE->qtчKE_V[P>L?i/X.WK{]G kGHĹHb)Y,?|6;LLbC"ǔu!OH!o2:9ZqH&@iSqZzj0wnKC 0 @41TS-3#+!UFT]UjEi>$зTLH842assrHWl)}޵$׿BSX~0d؋${/5~TKeRKJu>D%!1>USuJ?!3sm'$~91)Bqdg1{SȢhNdD.=i_sжl.. 1' ^}m>ZM/:.T _.3#os 'DjC_nlcbìL<K(|6i&yi-bE8Es";{ɭ&sOۓ^ΠNcv>dqF i d$R02tU9,{i1N KW0L8b$@&X҈,gՆr)եŖZٹKmʜCC  52@  f$M"pVLQ_fq%%YB4gtL3d d:9P4< 5zU jśNg-|J$+J N;TD+ : T(\6^qTQ'Uښ8т4RI!n6ƣ٣59u[J-):־wKTWޤĭ˽nؔj{4ѓSjKPᓕoJxx\KlYj&>hbȑ?H":-]4KWcţx,XG"Vȡ~D'l_͊5i&:^?Kw $Jn< xl`z"9iWY DrvꑜeK!}$ (gm<܍{?YD˴mlќ`N>ɃJ >2J{ E! YƬLJUJÓNKoJ𮣆)8&U'/st/(~)LO4qaKͮ)]1=^}!:^mU3gsoz-UB̳6dl9hq:2.lzdc{{0V:u2@"ZmyBFh&'*+8'&WDIk602`܆>dAQ [4RFz c g7ma8\GO _q}v= nof%lzB+}76K/^&n/8Ez4G9w:X$܇R6"Qr%W9¾%x2TEBcPQ=!Tm8%x#zfx4]?sVzr aS,_ȲtreA'Q<H&a.M5|F'NH#dG#ّ q0ktW.~,5=vuEaeh1a0TVDd)Y2}L Yo7nn7S0=˰=zZon}ȗv>76}5ɗEgmWfD˃S>rfR:A6f9DɸI3=OF8F(kԨ{#)'tZrZA$pОiheX&93Egz Yg:Rl &T]Gob{7$fvɇ{G$6T* Z5$@"%FeDIzSG7W=oegfZ_5'b3x3-dbY25 o@uɼ;xyƫuO/xQk UJ|U/?g9AUj4^3{)(ȥJ,ce$tԅ\""3҈# MxLh-S< *I<ƫ u#h7\{_a. ! 
-h)L^aJ ScȐ`Ue.d/2) AR fl"o R* raHKPYV g$!H/)#)*B( Gd9D')VOd>EޕBA`.B@3W]uN5_V1yelf97DtL k L>čNr.fr?'# QsʕmHR&'W"3BlSdR3g{;!c,SE~ML0B;]hx8zw Z/܁piPP|FXYa ޵?JJL~2`RpiKl6\k4ZqG;g{(Œ oH.ڑi&idmDH*vC\ZoٻSOkteأe#^ [H6I3]C^<̃yC5Iچ[0va0o?}8vaY_`>Qi?x8*to`tKhI 6;{?8x1x0CzapG+u($~Q6@4ٸ(u~SV 7[0 XFr\xAYƓܟ"z !f2y /|T.gFGMJ0\9[7 y^Zydu!h0A\ҬGDɩMB`Ĭ6=xu _CDcYsr﵁ n'CpgM#D_Y%x s/d%K>GLK~hߎE) "47\QEC$K+EUIez>ODӵt=Jkčc#V9ܺY|X* =jT~|ڕ+m`[=G]wvMtngy8:-cߔqn,nu3ƃADWrłZO'jZFW֭T/h"HEry7ͷDHY4Y# x1Дl SQoGm\HO^XUzգ(kA2еH념vڻ)+l+z.8nj.*@dHJX+ bD2}:0!֡sϓ>O><i\ṣ KOYF#: @Dk\0n.R*XV*.2Y* yZaa256ֵ`7>8y#‹B6o=y!c89ٴlgV6lC; kH?&cʖơr]m|'- #r50S,e@w%בv6w^fm s˘?>㇋vtxbh`f01x:Ʋ=/=g%hJg "M N9_L!hKRD%7``Tk70GVmJR!8DjnU%x3l~ 'wȚ}Vͷ)n>o?3+p5y}[*eݶ.b'~Wgk9R[+- QY=*]s+^製=c1w,m.WnfOX]_Vv_I_!*.yr=L7ojsaדoQ!O[^|͋j*Msţ}Dח[^rÈ_>U:8stz7r;/KOY&s4 0%&Kk$z*:%| 0 @Rb*Y$60+=D.Tlcm 16&I@yxEO_hm+ R40*yI yzyd~A%"T6"sQ҆}Q8@E#[Mវa䃪@Z:mis\l KS<'D*t i@KIަ!x59e +,iV\)҄: 긞x l!M.4jP F@%&NDNDk0IMłr)&Lh}u PpHCBvVG(\gmfnFB"l+m.\|8m} =6|( QL=iz=1E`qJАb +AI@>e}Z8ֹ,óNQ|@*idt$DI a $M襰%d<>$~ ˆxw1;K^d)SFb,2 rR0jBZԉ %PQ5!gq1b]=JI (H .q785_L۴LK?/&?N-_,ovࠨ=/ [72j' ߷dMI ,MEI`F9$r`KAfR$)/F_Fَqp0 X',+>uKeE}V8/Z?_:{\.J-{B$ L4I&pE%s h@)qs*jO Tf¦" a2) [KGJEDcJ'b jf6xBd 9d ]O'+Ca\(=ia} VKDʆ&e 2X9csTgx88 ?2 ǮFD}B"nAu 0֦1SJ]* JR8KƄ\d 4< EDgdcQIWH s3ѲQȑ4QACLCois0+H'\>p`"硬cW< al:mWd!:h9@~4$(:2C'1t16v|> qҀhd0 Gߥʖ)er A=e Ї}R\4 Bz'#FeQH Jd%H> (Y#i%J9Gᅢ<( g{/uQ>sϻׄŸ&KKx|}^K)w#c ];WJtĬ3 r:4XŔ"L C N(Ii#A 5DN<8۝ukY+pNY.םyЖěOkmN>5ۣv7}7O㘥 \-bW'=c/D -V́]zz֢H F˩ލ~__Wߏfnu}r5K/7gCmWZ<~NYď_vLVuTKd&t7`}6cf]nUGޝ[wduY]2Ksb!j{sjh*>Gߙ2mm{7_kew%j5U PR ~3D i1]gc~s7]N+>1 xM[*Oc ng >7$!F7h8FH^.8s =A@xzلGɪ| KG3DH*Ku#s=#׿NVў̈%Q[peK_ ~<ԿuB1|Wq#ߏQ]1mtߞ*}ȣzgt]:}쥷7+`Oqz-_pPyiN.-cM 8ւzݬic947֚1bs7~iseh(] (^?  8ɽ1*xͪoFm\_5џ8v8jX`l6uo3=ẗ[?.-.]xJ?_Kߏs(Cndz<43+3nyqM7nJiN{[ Bdϛ,4k1ZSLMKb]s>Eg )`@95b'}^1 f$:xԃZ[dw.nf ۧV 0O~{HTqhP~-߮[>p>-Yiަk 37t[ZzSs5v=_ުgp!4foK2j% <6I~(<2=+P3-~ -tD,h68Y\'lpVig78S)=x<qE4JРw8JSO" fMXHfXoA0> mqwȣ~QWka?G5W@z7$;݇&}^1DhR_Z4 [+Zkt#^}yE KrtLaLBGTqf]ưƼ0z= ){;9!,暛 G'Gwׇf=Ϙ~{kn4=Xcwt#b$=g`H[ۻL_$駉 04i`iR>,'4`zסJHGW,0tC*7|x\UiDݬƟV) Bqyh uu1 {]eh^)жdIV%e )8$%BA Y/)J?Ԗ絊.?~Y HϭoIY`4|ơ_6Ero q/NJ-hDp^+H*@AK/BI; :õS6 f% F J "!$IAdbD"wm=-5(b_JiS4&g3[(Xa.ģB<N"f ܍L$KNyVZVU)|!ץ_&LR)Z@)ZH) &(:RLJgNg> QcdRl>MDdkfYt$ǺQ!B))0C֕dgŢ%3:k}vC/5]ɰA+@4wb(S] ؚ u C(bM1t< l.<,8b+d 6aÈvᎠ}룄D(dC@)+X`<1&͵"UQCAh-D$zxEQΊB ]r[Gv_2P}ʕxLR()Oj3=rY )Єx,6 Vw^{>N(sRB Zc[%cKwSt-JI25mRhіDRZGWH }fN^Dڇ6:}S.RHU}i!бm6e!j$ԓŢS}h5ST:8S׾"fnJ}Q$\{ Ht=+ )D!; D{j%x!,;v4#H2P4@ZEl0`-OXUAEE'{'=:K:^vN#Xe~0P&ܪCZ]ė ڢY cMm̭.tt%).t-oU-vި^џLs [(k)0Ž`HJƬC6,C*ZD@ %X_;wMBA[EtzRcSt"p1JU  * +5+0Ȍ̆veBC{w  QA\R}2ɮ3Jt"T]!z@e gS d/Xd*zbJe5͐jPoB+"X2nPPE|k(P,Lh=4v#]7cEfJWukʃ AŜI' sG !.A f|)*)ԙ5TDť#3إ~.V!joSCAV(J892r^j LڳFQ 2#}@PSQzr{RQvO J H/[.)=#i̼Bիf !ѿ&伖H 안J)&dY\D d`niXEjE},B\FФBufp_ rCth{1#.EV̺HN**&6/ I;L')Q"_ 0by;/,:x|=?>ԂkW f=fM&>Hh"X 3 o b9PT8xi?olJ6#+J$W{HV*U2P (yCs ~?XQ|IΈ ʃVs I"9d^U0P>xmBV5 eiy{'*APP#H܎m }KOEU'US Y_"bΊf lZA]֊ANE"_n~. O&!z*4ʘ} ]{ #BK|u)}ob:EjC$*KtPK|̡cLGuu $$5(},C)9iD*ZuIBv 䁀:赈 )-f}` &$˽e;VE̓pD"(YC Zc~?XdЙ$ fj2RA ٕ P?AjDPqkU'PaRB]$1#dlDH!(6DUb|؀@ VH'Ytg#MGhTf%ݢZMnU{9 i ߤn$BG 4/^m:hIYTtcV-6`=A;G 岽 _R2IZ5vtrlѓХE4IPp$JHu6tk*%8O)'ޮ Ę@JѰ-'' <%2`Crh[SP.O(7"fh8(tR"˕,.TP=`ePˌ`*1#K[Ƞ rw UߴYaKl+TOۅ+ IISL1. 
[`'] +U@ &eCE5FHmr3u#&ݟAN l?]n'֋%ڸiz@gRGm,RJ䅑WR#9,U8PaLۄV;N ,4; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@ i$'38V;ih N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@:Y'PQ7{;a sw` tN > N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@:]'FMGru  Ҵ<{'PFvv@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b';N ~թb5m5Vo׷?]Z}^bGظ2q pƸdGAg6.qC q$~ CWWQz1w"3]n?s!}; Nں-ã>tO1{bGnﶅS a/zG?OzVz.\ /97l;;զbt]f]Q!rǏw-6ݿH/Vטxgk:C~jFo=܆2lL/ݷg/iX5^oQN]WҖЦ \߼/i1m Tؔ7(_д%7ro@xŴ}W*4cP|Cq5)ї}uiq۷v|`$=!=Х^lov.4}ils\vEk\Fe‹FYͣQ~<g QaBKsZSWI{LIh8d9Voe%F.fYBPɶH UeǑ]>vZI-c}IkQgMtG_Y8^= 2]Y8 YY8B?`et13]ոa:9 ](NW ! ҕFϟhKЕj"F̝LW'HW,vj `m0tEp0tEh;]HŠ Jh9 ]\;J5{uE(w@tu:tEvl;z"ڎBW ~tB6DW8kWW S Z3bPLWCW>3 +"CWWQZ5w" "]hiqaZfOWr%tu:ta;10tEp0kW۹ ̡d@] 5.=p¸s|f4} CtG0ff2@Ӂic?EK UjaG+B;"3] ]iU? w1{"1] ]) X+7 ]\G+5Νe:EHkWBWa֮om9].2]]9+;f"1ΝdNˁ芎_"Yj'vkWK+v=p.uMYy P2 Z[nݛe?l_q-^__] Vc/._=g|]t'Lg>F‰GcπbuwUi/mvXie!g''+nZ}IWH16< o/mqBts|towtv/͹H j7o.D%W;@RśhԹ "Dk]޳4%s-s̳loڙl?kjSʹv㐷yOCl) $S슚J\9̺=5zRP:j(բ vOK[1 ]\7̺=_#Q1]n*>s#Y΍YRtR=ZĨ?Z w|>4}\}kuGzra(xMGz#z̏;]q3jYDBiL|݈ZJ}4 S45̱\֊Gvև$6Id'uG) 'XĖ^bJ>[iof V"HpmmPs+B# Ub "f"ΎBW6PPtu:teG~+rJ%Nv 9 uvXC< ҕR?]v"ZBWξ$[* nf[+;]JÛoOہ qgh}1H( k\sZ~*ˑ̅sdlZoZywåq\fmu-]w^GO:k˧ZuӍ(o|VpM|Oj^͖Ռu74-^/m}뿔|f@uJ+"S%3rP~j~i7gQ_,lЧB냇~ŭu"[[~:~H{.<^:׏ Ш|p|? f}Mi$$?WKdyW>#'>sݟ K]QZ5彟&{Fd`H9!b*E4l&@jwO8efF#^\ɝ# ,X`ZhU V9m-,ǔa9=3$4a]֞Y_f_3ɔ`t4—]1^tc~i2;lJe,[/;yz}`@#ӅC5ݡV?g?ʻWu\M<$@- slGYܦwb}GĮT_n-}^.[ZzI}kR`N h((YXl΍ &i8R*k؀g2HAeo+[\Rc)2uĜ1aR}HF!PFHILKb{,Mz҅jImf*[G^րm<tmve?RQz^koMҤwvmэo7iJ,˫6KABXڍzJI5s3<ĭ7jWc;a%^{x=ViQqRm(GaCǕ=&s'䴼 q9gRO'|sKz"@BW{ӻAmsuL =rv_`7|6M?9rxWP[MѸi|絛H]?{nɴ7{׻Hç]\M\?2&x}u4}Am]?| ?V%ܔg(l=mTbVIH7%s qgBDg koP10@!%Z_=:?6*uoWeu^^/kuN3+/Zl;OLRg"FӢa黤qkŊZ: XqcXWy^L? ̣\J?o~ ,Qh!8d8*JvfE({FF2xR>b0z-cĭ !]֥ȲL{ڌ2fb2p5Ƅ̽kȼٟ~ڔ~pb&lx5!MwO wo@A26s&ud\,^غah$aᣪ|R!l 2w+uW` \ RTlct`xdǤ)K F[Uܪ%vofN?x[jt4v I ,*y$7 ϒ%0"K l0`zڹab4HDPQXDZeZ"C@xX2kE1 ,;[)XZm9愎%˼>,-&\F!D@زÅJHCu6;?o xkk88w^21oTZ E$' r.j 7lƄܘOB)ɿ4a&eQLGF8B.~2k\|)XTuKjVzw9=._JVTkTxM$yn};[{j!:?ÂO;!S. ;K\4QЀj2=ydįw8ovƻ seP*sa`kq݇wNpuZJtuSm{x/ ]η:ϋ kS`|L.@2pC Ld*0\fۘӧ>nE0kۊui&.򥬺?)G8Z@@rD̖{.pfV: }EPLlB7#Ʌs5`ʲˡ(̙(Ek^/CJWL!Q2-:UF'*dWfѫіnblAl1Dh\.Z Z1Ly2Il6 n"@]!wrAqtv'Uiǯw] "3_[s1brBVP4A ɚ+'ܺ8Sׯ>f߇&N>`1T"9W",7 TZǜyv}$ٶnj$$K$r(9(-;"03Ƙfr0d2LYY"cii$CeqhDNdrf!g!l.#T =B cf&4}"gS7q\oH,d6,lLONZh|岾h.U(*blYeSr!4. 
!i1$i"a<aEFet1{Oޑ&f'$Jd=$ɓ p]&jg?264iƩX(BabCo64y઒ɗI T~&MYo!CUd STrಈe,,9υ9{T3"gkxMtPFmԙ3{ЪRfAgfg?b P̶vcq*jcèjw.JBRK⌜5>x.z L&C8% b45^72yĢH&&OܐȞFGdSFT'\xؘ8&4b*DlL?NED0";Dܹi4JRdZ\XYSD>e(VZr"5H8I` &ߤ c0$dRE2!樅+7C[5&~D|=dAbiɩEb;heMN+xR$g`́ȐN3:Nvܔva<4+@X{6]4wwɮ$.Y<\HiUُE7XP׻A4;bL?6aw0B2}_Q,M[%K>}^;,d얅58 RkZ+,+:el37Gm>iEC)`oTzv`*P]1 V@y\HIL%ҿ+u`aW2P;uO{{sYm9'pU2fLXQ` "Y Z*H97˫EK![]ե{Z|-{:f~ȟ|IkD L:#DKSvvJJPky.?L;@ ta{)i'|F.%Ǩ&\iQ \eWqkeoqi5yFayn*;/oՌ(W;偉G,g:8%4 kV+b 7f` vg<]zg`<^D Qp4E$,y¯"8HP D ց'NȖ6H< p}ӗM +qNFITFIJ_x@R:Up1oAۄU6NªaU5>O^~aG*V]W(;e 9"T{ #]s?ѤjjRʵʞO J fsnr.½Vt9MSۂ?%d7^Lګp"~ӏN+/Vݕ"y2ՏV{6t6 E ixGZ-[kn(}lJë\M ;qUܐ0n({?ojz2Ԥkj{Mad[y_yq"/ؖ6<Ê!2AN1dҏ [aspXz/jq9DELCĖK}9NdZ=\8.Vڷ_<"{Gunx7DMC%hK+Lϱp6n͡Pގ.^q7?/fJŞ!Ԇjt6߳']L凚  NhoxA]Og=a=[kt>M1҆&-OAX$$ϚcA gAD"iw2]m91۲lCVJh3@uFpAsDZKƵ $Iԍr*)#r#A%IQ-GK*țAZ!A&'ud\bS, p`oBkp(n{&[k+\&Dް^הS>hyjƷ:Y\ޥ7Րyάx6IV Eq:{fh0|XV)m&41k$'H ycKmMr;}ۚr,&+0ZD=|hKUJP+A8!HP9#@B%,I+ƙ4DT0Q'\z#MQi)]DD,gYS*Kb&/ȝp WNj`Mi Q-*LX D@ @Dq m2k^_xRb-!VK\LUDdІJL8EUcPZSj;i)>#0ͣ Z$O`F4N&XkP9:k\ `uqB:kmNlu<[ VpndVh[R\s*0 =o2YaUT4H[{;"vK=6E>޲-!p&g\:t0UeFhw;.Fr㕠ܠ1}ģ=X*cez6ChY:7AoyaRV,;<.mt@vpHM^ل9[:R9w8Q '?G?TџH:Ϸ's|ۓg;6᫸ď;^'w qO9ru2}v?黡4A*P.*dpDVE+9V{pGoYylͽYN;*;~?Gd<mzx1 oP||kf*ז @1#sCvwuVշVE1ol\U\tb^3s=oNl~"!x%qX|ƕu f2<)j-?Nyt yp-$GJM.!Z^:l))qhDj*V`H){yނ#R-,-#T't™21ˍ Euƕ'6Z}}li3JP %&2` ܙ!L:QA!T nVU׶ۭ z=leW<~9M>b'-(sY'PJ$~ s)ܞ*cӎW&C\wOTy')p[p)jq4 4dVᄙąIv=LGQc*TG#u` `A牖4$O- g*l/=bkyt\~Z \M>iЛu&.D2 髠R_RUoa;O<*`~+[`oHN;kd4Jc,1'FxI%N9qbLx!i;ĩ58Kz+0K',5,V l'|hbz ApK RF1H@dA:$dVCUJP|p ]eZv%t#+I9S}+ ]e2ZNWr5+EP}JH%vGhU+DJS4w, ]eNW6]!]Ƅ$=+,p6]e܉2Zy(aػz=tzvJ+q)]q|euN?:?+]9 NAOx9O:Yɠ L3‹?Uo)&րo ;DQgB%HFݷn&x=ʸ\_7Bˉx e4v)B2\ }VB*t#+)׼Gtxo*ÕVUFi@WGHW#BU'մt(9]!]Wİ?{WeTu2J.:B@&=+e*ժ/tha@ƶ*ÅUFkw(` %]I#y5͏ǣ< 8 9E'UR,Tu2N..}R|GGg<]jv捆yRHKPΖ/U KZV>h:")rɛkQuwuȍ~ 򛧺@b$\ ɯ#~9mVY}@ΊgroI <w(b{Ad[Ng83uQ5N9Glii[Uw=sZ8r=V.1˓>SJ pRNaw.?r*iWf?F|36ӯO̱"gٯ7L8+v_Z[V ړr <|}!SW /ZF+;oQj1jG0=+lx2\^F!dWCW|͡9I8qu)IOA#gb"ec$ ܢiP=({1ƟY<=v(ގy ._Xx,o˃un?״GO&M1}ys0,N(Z+%6 bSh]>NTC `tt9iEKlYS00jBLqX !Ar6J"+G(n!pRڎe8pkRrdL"%ܕ`, dgj:G_l;kdd%r(MGfVDVbFSmWNfh_*F(X :zpp`%Io*ռ/th^Lcw(I@Ѯ2\BW]Rtutťb2`izCW}+D):]e tut^ҽ WFBUFٵZz]?7|jZmBBW-wmI}2RT ٽC=q}uW0E2$e߯z6I#sXgzz~:^DJdU++t*v( tutTw2`:CWW,hy2JA{@Қ %W;>T2Zz*ԽhEKt)]e*UtQ^v Nz"TہD S7\ic4Yz5ma_bsjeЊ3NUjY^AS%Bi՝+Dkl;]etutŌDu2`*=i/вsP%W=]] ]tpAt2ZNW.(]!`eg*+tjvDHW(Ctʉ( ]e2JQK &yj;]e3`F+H* +M%Ctvge0e13ZzUF)MOWHW Ct 坡 UF+ZȐQqW]^\Ye/Oc>6 {*PQck\4]>C-9Y=L-[5hZ4 i߱KZY:s80F*O>Z~Ke!UKbGӴ Z+;sPN;CG0>PP萉+;՝1RCYob_+tԺ3tpBWmR.8DtiS֝ i=WNz(Mqt\K:DWXqp% ]e.\BwGpeg|W"]UF + ]!`)yg*մ+th7mR.hf2莫=U+tњֻ%Pҕ*!B CWb]+DϕJ֯ ^"]@I*֢3tp`F[2Q*7CWĮyjչھf@>pCJqQNqb=FA'X?SUŻZXlÚ˭"Dyx`T˘ AiDȩiu`cT)A ꭁZ(oŹ׃ϼ[ -3y-;K֘\d?<$XjTw2\]VUFٶQ=] ]1 \tIBkh2\FBW-ȶUF)zDRxVٖ]Z]e=]]&]I*AL`p<0<%,+\kx=vض_"K_H1+t`aW2Pghg _t9B!keAdAm2oYU 6놮oDnޜlןhix/vaNv, gM^$d"x IPR4q<1 yO(vzCQ8)˲cQN8=<,l|>e`;d* ie]@UKR8M8JE;My~D3D(UdeM$hT˚5mHxyLoY^ߏF߭o7yv72o!" OkF$nSXZe]/}2Y~_[,XeM5~6,&7D-9[uG#2iAFE /(-4WDUfrx<{9m eS?^OWA/2!oP}U  5w.0 BEC8pEUQ1HDF sΑHQ { ƃjǹ2D&(A!+y|PAҞ"Lyhp"uDe<95C}K.5K}PNHHHAe_^PP)>+vd=*:4Sƀ /Ldm2,D:#@BqU FscMc ,0P93^N0y JA{mKHaIpLJbt?Z4fO>Y[5?Mu;/~|)C R_ѼU嗥N<{zv$ٓs{φv`u0ȏpBz&8qn f-ځ#R)xN*Qw(]לBLQʹ Uܮ o^gQ,9-;5{tjC[3Q 5 c(. 
qi\%*XHz<1nmXj=vTF},UpdMB4e_hm?vDTzTH>cjuK{=Z瓰4u2Ϙ3#8zyJ%dY)%D%!2XVE \>8&5%x?ZtG|ΦJK}҅S.A놂Oԉ i(g@"fZ T&F=2hNș"i  "b\p]Yo#9+,3eX`3;hLϼ,xRI^vUbSeҒeʖ\Y]G[I1#1f&ԮYK튚>LڴdL~ -͓Z>Jr S'PX59[!lJ 7e!$6fd :R$̠߮*2*s{f#)L8AO &IzI3%($L U*հ Ee,= Ѻ-ffw6ヲq2P|Ri8#c ޢE>JmNI`U0]FC&XtIĂs(ٔ"I;W[R?KUL͏}QVFD#b XcRRRJue+ `^ڧ E+PKӢ*"Z)čD6+ "jM ΐ>CB&Z$1iZ}B8eW tǴYm싋2.{\|ɌVF"{'YI+Z}&- d :dţaεfǾxh*as6M.y,~|G@yh%A9*˼ z7JF2B ~w5!57 542΍&7VgQJr";^#v@Hy9eed#a%EN(gd9, 3)Pe}!,N4%(erլ$TnXmΩߧc9 HwB`#wn]Nۜu9$!gbVUbИ 1NEY[(QKR$ttă 1 xvX"++q5q+z[uo@vؔ+>a-7Gw6Gw5=ix>,-qݍ+GeP*ÅKZ昱,u<|ͿT`6 4N HDc|Cesȳ#G8BX *fJ% :'2]ah)*g{!ZXn.ķ>l3nޞ\~=Lbno.gmv`~_Ы+ L9Cɼ'u1` 1$rRIڲ Te d;PNIVW84 @keP $d]7Yo K.~y_sn3h7+1K&᥼;7GURIkͮA}c" JVyx$?rɏGgT3X۝IVt818@E[+sQ 1%Z$dvcr>b їb ] ^=DS7v3Ǔ/aJd-=J0x+]f}r̔8b@ATd9="\^/C /x#^(.cJ~gÄ62140XЩ+W2;5[+%.w(;P[b{nJ] N-*慁GD&X*iz/0$$I %whSOOn.c=#[C $]NYzȉe1:qge#,#n 2C#OPZ2?:@ p2r{IzT$'RC$dnzX(o!3N։~Z_M7{PևEj01M'Eݕ. 5,!+K5@(ѷ]Ѕf:񄐲[9HKAMыT愘%0\D̞Ɵ$LE`TJǨc  ɬ9+o: ;w{Rk*^Qj w@mK gҧ#].ک˟ڼ?֭V..܌'_K m1>W)=v-&Q[v{m,Ϥ<vH"hQu< _*o Y%ՏV:A|~k1R4ZZb12ٻڡw i*Iכu׆l/wu$inrJDMwT GqjW_r7ca~!2㮛c?6^[WO%`) t>7|puLd]+wl[Z,)UX܎n/]Auy`$o5opީhχm "мE\dHy{KJeg&˺5M3{ZH}}F ,ƩWk=V)G2`ݟQ$ҙdLͪI&EGcCc~ MDn`lNc>K> dyq2}/#4(89 }4{ZŅeE^g r$a)꬀Y]T eƲ Uʔ'-csZ OG?vښjP0hM2Vi}$y^z*4l/WihƷ/?eʜ٥\HٓlEq.{fDwyNke" }ϝU$ff^siѝl]Z5*dkHL\he#U!u ];\peDN`PUxFnIZ[&T G6)iO$8;YW}d@Х3Bz_Eg#-ވ}t0bpdrTb4 FiN0-Hu-: T2@N'Q1!zT!13"dCe.B%CMjV_ 3zL`4-w׽,]~n18uRuθe;?;ҢUX?҅E˽H euI/P1#חY!f<^4]+ιTBHöJEwX{f:Ts)n굠K8jz3z޾#3`~Ѹ T= ޘ;Pi8Ӭ?ڟ̠G+S~>p N3Il%Xb[L0n6GF츍`L h#`dUυe)+9KQ\مQM]*\7+ϳӽrq#,r33qXї- '_.Z|<q¹03cvՈWW1V'NƓRUp*">)=)ߡ'ҏ/FOdHرU@W#\Kù>A269__\=otz6{ͦZ os}Yd3ʼ:>I`qiJ84i_,|Ȏ gpV8WJQ'W3y:p8WP#k~=])\R|*7^ǕMvqW&"o8₉35a0̚F!wd6u!wLYTBo?=[٥4CV_yNjv6sK᰼iB]|3_ixl;=?a[uMװE~ cu%= xϣQJVB>*Κ wwB9|;km#I"i0[yN적Yl2膑6aIDc`}OI*XQHqDeȨCY}닇o@=[X{noA[y|ws>gꮽd+M]kS݊(4nEjU}dLj#mTjcNmM׳?)5f/12E:Xe}ʹRCJӽ.~1N+݃dwZk ƍi-um 9Z~R_(iR6j]^eԠ?EuCt<Ց"Uwڋ`3<%cNօM,iAO/Ƨ\|](pč`&Ď\)bDJp`gSO@Űr`f̤cD8*VZ!htH e*3[!(` b ތ:Fxw"=dH_($6:7 {jMը%4J1;"8onP.ºQKQE'fD$' GcDIJB6}3*ya rr5lk- %ZpҮD="[ˢt$MHc[oP\ፍ` d)h+JUe ,c*Jr6z_fTcQBgsA9J$rAVMce2!+2]6xѸ8@{'*`#P܎m@}KΏdAX? 
y Ej$c3 d-Š.kI|ӇU!pZ"di"YrBUF#VX(;KSv&ϨuԆHT—Pw|CP=n;P :7SQc,C)9jD*ZuIBwp 䁀:t kT([|`C"m!Q9MH3{Nw8=/ (D xd.."(Ψ 2SzP] =(Djw7qVUp*y5:L` ֗E+`aU:9 4IҢJ$v'k'QEŬmðh5(U/ yHY=yp4vM@Ƥ㓄 rT1FX^r%5p  J`QLQ%fdi SSCcE7n֣bb + IF4b]M57XwH> +U :r-eZPʢ#Hmr3u#g=v33(WQYj0Q{)pcfjiQc b &_TL(X*-l]Ρ~\뜼Y%ϠB]?샩lajlJ؝5FX[Tڀ@QVTtvY0i/zBLօb4S3A8QZ{*(=uFm(6 qLU ˁ zul) ,H&"ˡ옅jIj;ZGH20DaBN Z&!Kk|>uVW4<坩y B.8!=jm6{qlv$'{rrN#}:/ttN͑(NOu55QOL7ra&jq"UCMfN[Xͦ^D<>Ux5_D>Ohg'r0J ~m= (X+^H%ѻ,$Ob1tVӺTdrɫj ;  1tc['};:o;m2 s&ʝڝm5SS3n1iL./O_WtAV#kY!@߼9㷕:V>hGz뾹ɧje'!ˎOG,9E-$RBigNIeH{dTP&2U~SRLnEUbgp|bTIע;̬}XuVz^g:YuVz^g:YuVz^g:YuVz^g:YuVz^g:YuVz^uu$ IVvFk?nV:נ^Z+J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%U!=%)`"s(`/^ jzEJ g@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+^(`}REe)`u{>x%PT`zJ ),Y J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@zEJ/{zgՔzmo_m wiΗ")fK0Xɸ7%2W}.ƿtYW!\5JPGpE۰7pE溽9q Y++-q ;'``7Y++#hzOfд+Rl:j<]-CRz{x1]ţYo=}p74#[vy( (ׅfݴ2nPŮVEמE|^~O~(wn"EK|=ĭ/Pj:25]G-Ǐ7,asCLxyͅ†l FOJ@Ƕh*Z5k+"?|,i|=l+AV]0/Xg΃5X #HXBᵔL7oAL o_2#/=_R3;~-,zo"|oC˟;|N\>ʞz;jHVci_p$VRCsswѭKpvmiz5nJʋPcD&eUp(r!‚;jA"I 15jF0p(8|]Tۼ=mT7p$#nigZzɑ_Y s=lc=e}lKnIv=7zXv)%Ydٕj%EEfF/(([$YgWWQBzm?Ҵ!y֐qpPdbި 9sᳵ,HP<(TʹUV7W04hM7&ƓPJsWܖtVzRt4 CPPR1ژi wԴW9OgPbVrGvwjGÇ3ɿLV&1HSb U[01Ą*YneOiݾP}}MK&|3rlMVY =:-}m$0K7]l>qAC+:oDSd:b4G2M:$(" OV߹ŽNE4uL?omF_F7ۧ9t5fo4B[vn'_$10̷=zpx >dοc|./9ѿˋNX|]o TTsyˡD7OKcckO1i 4]B!JwdQi:m]MG bhgV377\Z<7_ܓaXy4nodnJ 11&Aڻ`Pgq:?i=x:EW"u?ΓF-YZç6RkĻP7~Gw(z9syyF|F12G?dcW ]X<(?`K|h!qPy}{s}8Z@@Ҋc\)V: K_%AEo3o$o\\O,{Sm0DakjQ8U#ۼ벅gɬ@[u:&뛌N4ȌoH͢W9D-cx[Fc1qV(=VVL}FdϽbJ$RnZrnyH-iSd8\|#'Zϱࣇ1%Xi*bE+7SĪ[)bUg_H/bI.pGd"JJҢ;"03ƘiN0}>vn>'ly&8CB4LK.^ ^`) #Kd3-dVsj YAdF%ȫDs!1Z3*j5qvj(4&$ZVӓZ>F%/?h ˅7P+.@kr)9ܐ. !i1$Z*<azFet1{όt$Y'דBiKzI3 p]&'U3VggUjq/}!}Q}#Z?ni~qS%ӯ0v?]_?_wMYo!CUd 9$=dR,aBXt+Ԉ s3gxqB頰F1c3GЪ\:1mjgQ\⵫iǡ^+{mv`i|PV:N 5KG/3kh}].LhpJh3 H@bȄ E4XP>pD6edHufa5qvÖԏd28sP(+{D{#nAR)K?%fy90VJ b;a}YZo+qWE>w% \ڞ[ %4Ydho\Ny fOm7'.'ڞGG-$]&*\Yd͋HJoSYZ%~((5" Uk}22f9<|=:EP;8{!zN}d& s,:/Lo]g$N$̌,i)ڕE+n-) 9>rz˭f®Ƃ{;wh֨!gq#2q%ʇ) ΐźe'4\Y vߜg䖛eB@^ 3٤R>DVTjigVEZ}qxx#rq(ҩ0!YFI'd:j{OF@m,51!zl}ʘLuġ2Z!{=ԪI=Oza\KI2cr"d+C2BY<=:R={ :C88+_M'I_n:9h8.(ݺL"f㵥vB(B*t:F06V/"BMRSEM}j*rƷܨ<&Ho9(& Ÿ64\Ҩρ^҇dUiJ/ARA=< W%Zru`e1z7- kpIhHYk>SΤHTeBBlJ&FjSqƗq۟BBJELɉ oؐrsHfA3'OGy? @At%yZ*) c'RFI'w.b23t2ߜ:xʯӝ` %kxiLx >.3*' Ҡ&{xV_iDG }}hgG'wوbu݀o{~:F1fo"d6EHJ\+uQȜ\n`n00xm,F8-7':c:Gp(,Vݨ]iN;rǪ6_z^=Q*YP&!Cta*,$mĘȦs&LE4h()5Z7'? ЛO?.-ױ> Jx~h23Wϴ&Ɓ? M}ݧaˮW?77?:ElA>迃bhgGV!nyжla5 o};,כGZſW*힫xlnz80 ա^t_ ׏ I0g*4RKl 3B53mEOpPe :xi!锭dKX0 mtT Li 42&Ys!>C}`F9tJRT".YK1_~12v?O$.oaq&NKdS'_. 
'X:kUL6&t¼VG_e*u\iѳIiFvJ ʯ <-LdKU8+<Os| GQ<Z'H6Ybc=v3ȫ2EUaG?-ކ?|Q˫y3s~`omwWh hlUJԒ@~y2X_\auz}%=bCǓO>רoŋ"diΩAfqCesȺBYwϺϕu A,Ζ3絷i7f%LoN&YiқG\ib JbY > b##]>RŽc*%t )i?0L&F ?ZG{qb~JH>?{WH՟eɌȗhV:[|Yh ,odـmL6Tg]Ȩ'Ȋ8٢q7Gb^bF|[ phͺ^xɌ9+%%v謡"{a@ﭡ39t :ޥ>-hA*ڤԣHQ4lVOJ6K-@:Օgu2 * N1#]NAec*$ (Q`RZ uQ չhI9,a\ĬXg-?Ur]]ZnY~0kN*?XkNY֜>rꊄZ!->欀M?>_# o(IrO%ވ->^_~bRtGO/󷚦tWMj҉&gf\z~:0x8i~of{;-+/oW$f取G dX,Vw/D=(]o!I\5S[wZP*[qχ$ݐ-}#ho$.^xKj-LβħT-lχ2<#c'Wiv" ٴR6r[bVo 5mzPki![ŵNӡ;$'ȅ '\-JW)D9 KLl>ہY=|q< zVOѢbpWp}v?ρ>s/gWgqT*,Vwdhtf|l̻<ߕ^]UʼnϬR<;ݣQ ܠYa|Ny+zu%u꿯zv1`/ѶvU}i]*b!{\u~|gcD;ڂQx#.R%'!> υ̅ph( Mޛ,A^h,^xMdHCBHQ2FɩIhc(&'ESɤu{ zM@Z`,^of;[=yVC!/{:RZ!a0+p D^mu Gw2Ont C2Nd;$c:_Сgg.|$T\H⦙>P{TP5*HlNT\Z -BeR)ˈ=@LJtHd"dRUC.1Qe ,H{qIrV|;S!Y9l]a:j~]o1řvgzK+UW!ο\ 'Ǒ; xH4FL;ͪNjK;flin;m`.3x8̈ FˋiiTСsG{_Hls cnK,]2ݥ Fmy y~{<>2ݲʬJnɸ4v'3 G !O畟ɷ|6*rq2m=w/=Dڪq$9<{r;t1?3fMެJAm;کt_)$i8Q*|39h)FRCtWw tu7/)yQ+ [K_ȲFe7t#]偑vdZĴq2N/:?/;I yLZK.(ݘ#5@Tzb(Rsg ) "Hݏ`3dFk\0Զ5"IP_{R,SeZ `?Ui׼:hM0xIk/|Qzck=CAW8X= u(̀f`fڂ}O3`-184+P;X[ežkIrWvGo?l+%Jj~a"oOKlg+dV8`|p"ROS/O&u+U)GdFwݯ[m_ ӏS׺ׅGiRL?NK0mGK6O)XaD{0pjPWjJ"gFzp$Xa'`પPjKzjihwW *ܛ`j+;UW`y`z~7 9f{?Ԝob\PY$]YK W|Bg'et>G.MˑuJ_mxa㇧n}??,-/C[Nj1?MfN2~p7{kϰsyO~v+>;y)"u~p-LJbv1dC$%kZpO0=4OLgbja|#g$YjM.tPXABGFKL˘d5wb5oSM{G5eO,e0Iłr)&Lh}%8gBvV8-,)њh?mA7R惮)}5*S|L?.OnyG7boyH>uɭɭ΁:0Yyj}(+OU[}ji4wD?Χqib|~]|ԢhRTCXnНQJ<{xA۷ Y wҥ} RD k`JtDd^ [BƦA笩6XrN )DHv[T%I_w; Ѝfl}c-dS' xz>:w{!rEFb,2P Pdv!KH ~§VAa{U;[ʞQ%f $)kQjV7#ll]̽:5_;j9p?5}NZ@{opɫ>8o \ 0>6V?MQys1?2]8,/ RLY(7*iUY*5%Y Г9Sq`KAfRIK0HFfl(Wi,cW,TF,|V,Ѩ^՗ʌV4?X3:Vpﳸh4Otzmz:z\.JG=$$E AK&pE%km+*$(~WVP=@@MM4&.±Ithl7ٌ,-ͬcWƨ#j vG/4"i4&kB } >Y "F5Ydh1X23CEG cM"j`YQ`Tg}cvGYw+x#C.O# ] 7ލYue]_)v5QصyyY3{ DNRvd;tB+݅ T,~ Qǂ6cA=-hSS5%&'<Fzänq$d PTF((r% Ӷ+\{Cː$E͎w!GJCAkG rҪ~JvbJXBS]U}0dCn%هO[LjI z8HY#QԐ1lٞ_u\~XAÍ<}e'7j؄/IV d?Ͽ5q `@̈WZ Uǽ{}xlڸ/0|lM  Qv%ecG;ԩ@`,ZiSTC {IMN%d;-/|eoub~#n_}35ާuayNXU1 MZfB{A)ʇ(I,}iSF3mlNg'vK=;~}aBQْ4F)!0b|ȹ_oY/-f'[ܳх+xvbWx$6J8ok#vhw0#Չ ϶+zz 3Ur5mm' o'}_WcjMD[Ss3pUaj1e[&׶CRQRip o>Sݰ^Y;ixqnaqyqZYa?-kv |²YjY9(!Z&-HzFpPycQ1 S:F y:PLSՅBCw`B 9{)1z8iHak'tu#~Q8yGku_q0]\wE|ܢxɬseKJYTT J6 $5Tr& B90g6FӫzF4O1/P%eV%,;ł@J²g/f77N@QAkg/XM!39K"|[.Z$Rx#PN\(IFQomh#!CC P4gw'/"KC(*E'Eͪ|j<'dFYeZ1BR]Q*B*AѨ)07w*0'w.0L[/i>׫O 'iٟnEׂVmo^ VՐ%/?;sZ>h^ inCEztO Ǜ4]yn}#Zzմ]T^ža݈}ہ짟o kno߸/gomb.gm]՝ȉG(oJo#P0tlȾ]jCԮqrxՎ^$eCOZ<:YBu\#Flm#l|ĤB&_N|Ju^&]zuȆGp?|P` n~_6RTmJ+\W\Qg.&)2Qb1T{'>b$npƿxŇ.)RNh=u{Fqsl WfY5uU 65Wӳ:V11 }LBnAQnwtlgk+3ɨ]x`=]u?{Sx=_vnĊfK6^>xʱ`veD 1컎c;z XՏ3s]Pej|*sao?0,oKA'?D//~hM~sdzub+$4(T1RBƈ5v) m dΧdҺ{ (ʁoT: vVwVyDqgI}GU1oYS0!CڒQ?ɦ@ OdT)ZW+y՟#} b|pLPjogu@h 2CBZ[E36&bT)Z g1XQmB 2*zm@"CAhա'y^pmcYYa{+gar#oS3W;G%f{3k}jfƎQMJXo/(h&ePi31 Db9xr$oQޠ1":Yg4`J{Y((Tμ(YZ4mUUP9eso#kiRlj2S``9嬕_OI젽ZýQk0di0ɻ#X(%$$&A", }bXtQ/;E~bgڸdrVSkB.X7B؇*c!jPk0/G?i{[|" e!XiglA 9FEf !dަ8*AIm {mRCn{4>+U|̀F1bVwJMum#ox&%i]mjKzq*7+|0 (8%5`-2mUlb(? 
1; .ؙFN<`o"qȾҁKWjX?Ӆ?_ҴO>e]v HM(؄uI,LH\s׈ɫ?8ً.Ww%wj/W[4{vxP45ϑѣee|=ZKnTZ^HK2-)kCE[9k-9wFlzfy.@E ̬BsIhwv@IKe9i*UϭdΦN]IW]/&߮*_?P4sy5.-?`?4Y\}}Ia7kouj؛7kR/Ϭ{ħwqڐ7m>[ ~.B24Vh`Pђa+;Jc[" $+s~KE/G F"%,> ^"Eql*a2{!);$>N<m\UZ9_gqg]_Y W:^l!!U1foL $Cb꒝$l &,Role׽MѳyheKR˭(]{ߎ1OcHhw0%b1;+φPKD$׼O*-yd(\((R!bscGGgoGڣ .NzOC zW:tA6FC':ABNf.a,ӥ0&*,:!5s@I@F mojuLJ|]DuuZ-&_>䒯& ۄmד<ɭ{o}˴g~[^{X:xD-X]WI3~t^NY_?_WCf=l&|]f͹%܃iv;X=ԳF+<[uGsvqYmsЬThB]0+$t“!J[|䠂UF‡D`W*kũ|$ӢrojY|QNIгw̅9r lTB/ t# ˤΎ2vB26ϱFh*-k V] c1h\|F~'恆lX7$ѫs?nedlì@ִl7.kPToZWa#{Sy{㈎9CtOyvt :_\2Z44Ee 8CDk'^FF4]z<<^9D E?tdr~rft NiJGHDHyHr>_6_SCAh\ Mm2Tv_RTfQ/*rw=kDY#5DzFutſϟ4#z*XQyJgS^蹔ejT^^R`,/FILbkZ-*i߁Ґ2 mA$݄BcLŊʘ4}MCR'4ۃ(rRkBAJMѪ$)U&j5$>01åVz mf|Wlt@.Kd~8YHJ0CaDf "uBEb1b~tپ YCP"LBa%VRt@,gAJ5@{<.31}%o֝r’ы|D-0 B\M ISHlwRIOD Z:¡,E H:lEH ZEZ{wmmHW;e##a{6bfeg;6pHHVGoU$E%TwV փ## x5gwL +yNu `%! `RcygzigfMn)L'^-5fu BQXFJ,1祋iˌ- TTR&:5NIUlLjч,;r`؎~hK 7\}?yyE݄31D4 65>l)j9_gu۳;wJ{VpB{Vў37s4)$:|"2Ց$7P8V/nGd\LVߝs 2Z1"R*eIKNzrsS,Mϛ_.@Z ,yٺZe=B6USݹxf g\M;d:_<;_\^ȭ&2/Ho3mt}꥝}`3X*k?Ww7|fyv<{a\jN>_-w6 F,T u׷ Rzp9M~Y6fB7۠_I;KfchW$$ K16&>I |%i'Ǜc3|=ɽ#-މFF $m$4 "}8I 2MBGa/=DcҿO;'N_媫=SCܗr@W)uEtl'gC=}Y=9Z0I+ɨSp8~Iz>O|_*!Y, a^G@vkћӮ\crڕE~ڵZ n8OZ\b\Usڝ:\U+• mrRU5[K O3W}+g׻]o(M-gBVCYt]e Y.; Y IO>Cgדo~^}F5 3ֻRHDsM|_ ~6&7"Me>Y,z6vDs<1Ys/vd YZXYncU *k>KY z-K2~bc~j)Ţ^=W\g5tZf!m<7mg~~kY6or˪dCJCQ& C$ &2FⰡ^>YճRгZ3Nާd3_}ii[x奅'+镗fy'YItK K nXZ8FNjf͕ \!I#b Moચ/pUppU#•4v 6ǎZ'W =+B+#b g+/pUU'WJ=]GRHileoUWZWJ\CҖ>j:{pVj{+4}ZjgIU5WU֞ [zpe${WlB \Useo`R!\\ծGE}qdzoӚ:["9 Un/=[eлvaQ4vhvf;ol"8bahl@CNn^??VJT6ӑƐKFkU _0hhҖIir !_EӮ'W7v$j;NDKQZYgt5p,ɔs^H;p܋lN_<*|4౥.%'ٕ6'6'F+ɆHdb7&"p>7|Yʬdu`HS"H5هؑ$j}$ ((x6c'̜N|ǺVe3W 8s NZcoܴp -sS;^[r{Їe t?2ݖ=1-v#+ƳGjԩvޤBSˆ 2"]bTu:T*hAk(Ţt)&Je ԘK9F sYY-XRVY񄡺N-ZG|N(L1]"6 htMl;3g7en [?R-5fM]i󁀾@,?btJӣjpTs]oZxJؾt)z\7sj9~(|?LȦ7-\ 0FfҌ6݀ҘT f3w*m'!1"ƺU$W& 0Rʨ&MA2u*M>g Zas 2Cp@Hə$%x'SAcܙ9{nnSRO? nb:SAHhIdvQV `I%. N5`*Y+ R*1S`'1(@acǎڙ952fr?\m'F]%䯹oYr$5 p`%?P:is 6px~};\AIﺯ};BZIUqLfݲtʻXxN.A,Z-ïyYJ3KEe!GH3_\I)2jkJJ%QlkkS =9hK"0 b&4h 52vfnd쎫tCPv8`Q΢ڶʌ[bJM,Gb6\.R,{aI LI+B LLT&/*J0g)%:[VPŞO(d R)hJ]2 LNv3ە9;LiCb jw6:Emi@nQ;呉sHAi4 &kB | >Y]cr"x]2%+ExX} VBff*:lkRQcF&dx*Ȣ:;ٍ Qx*8ULjDqɲdS:*%Vʈ⃊R2VX^2&+(Z)": BhC" h2&ge##4ղZ#bgFį登:0.ΚsZggP\4pqŭHr* 䀚f=$4ODDC!y)y51pdsWP<= awQa?kv;s5=^Q,kK`D?N2 d%gB[e[ ҦT~v>e(G@ ߶5Xp}x*~RxiV?[S-yAۤ`.U;TI d" .Qӥ"ßsȒZ{n#x&HӼ_q7jG3]hRgt<}|6*Gsa %/?=ie>O[Xm#%q~r)ճmy5W<=s $~muzypJ}GE^J_Mƍ:oXF"5ƛ/A6NGAڕX̰&9؁ƞ*΀NE 9 沙y":/S$X&69$,j 0QCfCr7\LBkbʈhl:жϻ_{Jq|ėICQhcqHDC=mp1lD:+W+xݱGh<,ɳS4lR(6-ك1*jnC` Ea흙ۃYo@^O&rZokb|ˠv]1(WRV덊G.\4wLD#\ Y MR>I0tt$ēӑq0PK!U'dcʶf+KX# UE>$" hB=-#AA7E7/&S"0ҽE^gq:_߷]A?bzuoqvPa-F!N²,5{kN-JRi@ B˫a< tZbÅB˜ b4Ve&Pك0^Dm[oRqϴdug|3c&wh>%b߱1R-f}+\MfI^-)Yb١ CRaD&% )UrVt:\=3 [C|E(Y K T E FF́xtUzm {A^JF"tpU7P2;qA?bw ӷ8Cj.C%Cz)Z]?ȎR ^xuӏ ͬq`H"8E5C)\sT,L%`0~Ha~PiNq<^{ |XW.'h{r&7~PF"Z?WCZgmHr 6cwwU [o]6H6pTy×!ij(3=55UU2<=:xR@nB i!Ɵ\pg˔h?-a-wZs'֪k/i6֍Xwôm~j\.LG㛺ittuUQY6@=A~?\ùA@GkqT"abՖz _htGo9,0\{Fqu{*AM5#cu=9!::D nIAƸlc<,K)e'c]J$w)zB!&iz}u, :Soݭ^x3oJ%?ϛ~iΘb7f>Yaj ~La43qe뢽Dtg"?dٳ7ToZP҅(%Ւǡ͓u6ɋif{Aղvz3hh֊`.=|wz<̄z fI&7Du{j@8]^G\uғ29|SfҀ<W2\BٓN*K:A8 =Bnlb-> G{-7JeE^gڼHr$uVYq!e⻌I*{e@>oi B*ѣS8OXgl =GBe]log¼ z*SyQ5Cj7~+- S] #zPvqK&DNkvӦJKD,gsng{t:^i֨yXNWI ԃL#VE)9hBYƉ7~"¤* %1.Q'B|:;YW >?r@%wxidЁKlG'8F B!J "@hYK#wOL#6%D˪K)dO3s)2,PYHZV= hpR5zU3U+L/z;{ ,0a IfLBF2$ze% . xDgbG'":"Hf.#o׀w%. ]23OZEJX О&6,JFN:#76֝GpBO:#}:fyY!QVpʛh2(&t]8XiT@A`ɘ{=XFW7QȾ܁sD'l]-4,>^nq\/W+%Jn=Ql M&.{V?;HIY"$+Ô09<W)Nr}{Bn+vQe7 P(#KgQjOl1ED'Rk֌t@2q\|A.sA l$[|,Y>^&KL[#%MNe%[ !8r\Td2{7hoρwɴh6&ZC/E _Zm{ژ٠7d3;}!My,RNJd4~Vtm^_*U. 
uomo?{4e{޽xjNzJtu]i{+axFKȢQ0d $7UMZe@A]F^ܿHjXx9I/|)+'bik)HvZ-lm~~> [m&/+X(Xm|^'jQ׋qlj WVϳuۢllkͶ״HҠ>5oNjZ(lR 4b(?tbE-R1@*w^T,ӟ^g%zk_1< ?Fij:g-sT2EDfEN' ##XuJ")1mF\HYZK(f(`{n1! 5Rr.-AOM/z 8snos&ث>OXN:W}nU[wTpbɺ ֕NUh)ՙu:pغpDupJ'm2 b)I; xY ڐvGHWR\nk!mtY`d:zdݑC,ǔaN!F%x*]glWlWuclIDITIA GQNDr8(ڨ$J &,);P>@iP5w)q!+QӪ!xczgd$Vs"Ш-Ѳ]FwnuCQAzN9N#h$$рIYC2EʀAt`B^*䑩m=zI٤P(]"MC0s5MvsF;P0@qJ RL邮ҧRo`kכ&,% 7]17n_l|?]j'b3Bdɸ+ąI} G9FG6!L& 3Y+͝iׁKY`d2d_n;,>n9YD ;[B.JcF]qD5y+ =#~Oбus썹'2渵Z=AQ} "㫋.$$),XXcOV&B0Q>83+(8VRH\)LsCF1F̂sɹI%Y )B4Iu𷐦}\1ݲ,ήi& $!Ģ/"DP|,i\3hfCUzpXr۾'d3VIh-!FSf8,`.C'G$DLܗ,>pާa+AL3NV`NXN^r̃di12x8ؒ|࣒d`F/<^xxWb!Ly|Vr+JګD ]9Jd̄/H_ ڇ}X""ym1 *̣ELLIq#lhEr=ȆUd]LJYCu`Cʐ3fbP%+NM<\GFk|w;?߹ۯgU~}':^T$QqEU+璪)[4 `CYhgV(c:Gtf;:z},-X d6zHrkk&Ϊfw#))aӻS+/判ec9rȖSy:bghd2>}x^ڱ:[f{I0W.rZQߐ|Yd:1j%TfnvA5>wtsnW$76Vz+W͠ ԏfР>O+]]FoO9l٬ȊFV vށx䕑a<m7{]h~4̻9)|G.]NV&1WopT.Hs:D[4 (1^]v"k-z#j>³^yW#ф]&E`IB]e0JG.Jkp\{ [rZ,gjޅj\}JM8W?N̎W][nΣ7=SyN1tW;+<2e:sCt>M{y͆0g>q0R,"Ň|Kr^$$[S-g6TS?gtTSKGRˣ|G묉x5~Tz|VŸ/b@TjU`k .AWJ(I*6;Ho\M~5v&:VOi<4a֙uj mZNlF>o ?+t5Bx OEW#lc ڳK_4EVBVQ@_!i%fn%jX}D%p o,oweˍW:Hdbs/$IlJ#K ( 7M,Yd04?c:mkCln2/eѧg؝JOOk%-+'%aa;OXB fYM$3zN5ܖqepLh5|eS y+@Bdž'9fWj=IG\PBlu$eGVqdݩ*M) e.ibjbnbGkbOQM9T,^5%{r**BMTW.3%;{@XpkJUPIZ%`OU3;u bolbkΩ⎕W&2(DxO*oMpH_LPk^ux^*XJX(KY;ZoaVխơw[TVh#b.l:Al zY^p^Н袼Z\Bjȑ@%5| ( ^^H8Yy: ,YCϐ7q6CFwL;9Oގ9P zdZ,[3AG%Fe X}n:Uա5Q}AM9M1D%tP bїjr*^njhz4es2Ž-Aoyl**91npka} ڻc XMm:Sʝ\;~v( &HY6d'JYF[pUtҪ;#ky+ovo z0h8FM?:;'xKп_]_i99ջ-GC!|V;&gwN>+8#Ae1f0ĪTKHBfo!@t([v|.^ X00bTgR2  P*VkX9cm{ƈ%提`;~==Y7 jev4kG}>.o[ c:fo}{0SG:ok!׊6MvwCQɿD jxN]~xyɆAOs:\ɕ|n?_.<<\oo׌ӻ{fߎv/볏nxe7wyi>ӥ.25Ǭ)D-+:4n~nCq}/EC*/Js kUxw1uw~6@\N"$֏1[KiGRs HVHD|V6)cU)Bˣ~_Z:Uki}R}=Q_@<3jEvkЬMz &SrɆ0!EX-Ʌ/3_AJ[P):@;PQ0k_ڄQs!"z-6DKRĎ38֔=Ձ .x.F(k 9ecm{ɰ\*Ag7 v7 ݽo:k']slpz8!? ,; &ZcJ׵hU^n\.t]s/][B;4wۘg#Q#[ʕmtN;b=+ہM\Ulg%\{|HNOuj]cF&ELZhm\Yg]Aʅ8ܾpWsbAWˡ$UFH[U\R69R6jRE9rE <,P15/IU V4TL(?h?ŪB,J4T}YQho!ѲuSkTxg _, .*zWĂ5 h/Ⱦhdb 9X q6{ֻseVD犠7B,2ZIT`1UTߖ7q6[Jo-ĞmlAm ].-J&'i2qß]f-v"r0^4tYIb5-#V΍M5J0K5boy޾P14V5M.YbK]n|k [S*9߷g/q6[ly.Vձզ6 V{@V|D-*3j%`'3쁫N9$JD٣V aȂ E',NlM.*Y]YMUR]\=l XǮl`i>AMTl;Wk;ƪkFeeThZAh^-bTsӃ-B0;&Έ>&/B OI&iN}Mufrt*.`r#S) ȾV.6EAIl\E צ&!C)UU]y9~}ųa}]LX5. j Omڍx8^(PU~*˼L}ΑAkRG@H"]u¾k. X*rb;ήNOP,!YEcPzR`UA*Ǣh7!T,x \JRQ-" 98})9U#Wٿ.V%YAo6z|g=1%sJIe+$jLU9bхZS5D™ *18hSJUV!aj#ښEGEdĽYQ׼}enڱ)ͪm(9~\Qms=&:p=a}Yu;2pqW~!/:MQSG3PGA:ڎYirP@Q6CA3-hS% P_ {I;N@̺bNҡ 5+uԨCȣ`RHK2M3E2 HUkΟ٢.26i"n_'i@qtks kR xDx;oL&bjq[~}fycWwґ EMl΂;ƞ!L\,[z913&}@eorw8=nҲ'|]p=f~g Q @ h(fQgҎr*Nb mmʞc%"ĂGDDpNk|٬Kq[]-@!-hռb`a7mKBrXP 5v5w>Б Hly>;ƛ<*ف좞f,oes 5Y'3+6dnip›9n t?G4Md==<]A/q_twmI@Uvlk/T756g+z$BIP$E&g{{濹;FppnU2LnVs0R`}TlNua@GD8 "$^}9 #0<`0aL8'(yj&B$hߨҕl<2$!AN18vDi3M޽.wͭoL@<ͫ#dO7[I0D9qc qb<5Tcf#t<zFyjx,pCǀDgYĠ@l#I aŐ/`N= NjGFK 2r509H5)0U$ kK1KE"#qDm95J}gRXAXo 2VBqH6vR&v;bH_ēW/m䋔U|0QY(!j8:xKC*|t$G}0;S`uQ`.0Gd mcG|2tv>q]ߛ-*cE<\>szdK RvNA_}/f?o_h1#hU=^vڗw.""=#Ep/#fg˿Gۛ3oX87eW_b>N꞉Y$Th=`*ô^fkt͛o+7(Txd T9^7v[E&K = ây[KODt8qQ1P"c:rn2LGxFM`[P '`ynGL'nEӋ@g<|L'׉A;W`4B8b KK )(gcn@/qEX^]fxcߛUp7PbCX5L{^ƼGʛ+VZT; 2;Ƞ8#Jd$EF*,؜A}bvhDN^4'wf{>kȭKE5JB# )+I0RРME(-HXGYLI8{veC 9k仾3rZ&Ѵ7WRjD尹/fWsTzEQ0ኾָjU⛲Š#=VD.yfrJTJ`͕`hfrv`*CNX G/TT!Gʍ^yw҃3rvbO¢fp sjﰡZN=(i%=>DBvQXPsC ÖN"ćN824J J"΋QDKq?9yqJ}9Uݔ1^nJ@ҥr6qlw{ft=cN2ƹpUeefi'$mv"ҳO_8)~<)m镗') ofUG߆A27z} eVR1w]",A. @Yn=bA?|U|ǿCLlVt5[n$UX-<[}6Thw=ʝ(ZUVEqBf|( :^}:+C&({{[\ѽ8̀Ϡ®I>.ϳj?_zuOcEA/y eΈwJGY IFxhN 0zEj#Epɿ##l^jnu2hq09$8a`"Ze5B$7-Kˌ3u]3r Cxo`DR_hkls_nn(m;yee/"$r#2Sd4HH&}[b NsgACe85Qr;ULG9{ vD ;J|J+e`|yQ.j H bcgAPI(a`HdDR,*UQ 6Ev,gR Ob&U,=zR`F{<1Zof9J!F x $)Mg.ES [#?LHe%*ey"H$t`CEaYq! 
UcT:S|F;i !hH9ui.(*[)J9e2J60ŘQލ p*E3*v UOK4h,;@%,e\I ӫp(J曈=V:֝Gh{bB{j@ZUXuyO/_˔\qXt u`3tny$Xl7j4t6tj?"4YϮ^VYFz}[5ʆ^9A;˳)ITLryå3@c(RFXpw ]yCp}vu !KI(>L%O'POa v[6-}6oOr}5k$ Xv0wwyT2n/bTweuPʚ L,-yF:H*0_f',E Xm\o-(+|_p %<@rF5(:S4a }&|}>ySc̱|9<og1Ma=~bMY&nU;7w׮Nf J^U)bNS.SG4OEmx@?NYB$,L|vFX̊8C4 RS ORTP g_ `.猒ŗ7%xހӟ`zi:qykĹě[NJ۽ev!|=[a?/k2{#lEI9|vtmtOFhݧm+EOl)o!&F !HHXTض|&V/7| ZUd}G-sgNgfSɋ<4P*}7k٢75 K3O6i b.CVDtʷ@%;| =h ja%Rg>>^F2׽܅֎4ёv 1c*mzcXܤoZpSR~_+MTAd:h t]i+> ݣBwڴ6H`66`H6l@R*h{#Gk>+ ڦ#PgϾ{ݟooYd@'@0ÉK> A'@-A'J6d<N-U_ԨLrJ͙"<0;˜|DM5q5c4pq,$b/cy3JrL(84хő&QZRXbmmOf ^m: )"\k?t-3^Tr{▄MߝejIEBu)?Eł$~?Xe-;<$W 9KT1I@[0Mأ[o,;>u7r5{\mj9qݨ=} )W@d0ph(pոpT* Js9J֒A{WZWQ#\= IW@0l0p}=ZJWb SW@+V{/8H!\I%9vVѮsJT#\)l8pJ*w*}W+ X &r \ dWSKZ3{̏s2=)!A5F0nd~Eg 3v:5A76NAzgS6i\zq^YJ / 0EFXf->O]Qxlary .^Pui_%Oқ_lW_5[ﱤ-W簻H)}n{Z4'"snce(2w3p%hr YYVRƕ=`e׫ܪ!(r]/ 0rΙ&W̹gܖ븕3*T.MFԮU.q.pR)N*iֈz5RxZ4\F;Rw$ٜ:g?.s\|uNe )bi5$M+gtNn jٽJU;puJװ+[J0 Tm=2p2.WBSWbWUj {ǕG1x] W,gk\7M1jݕ*eWATn`Чqq*XD\Ev+4R j'pubgD`TYpjC;RW_WαUS s?5nZOw\J{]]"yI^\A;ur}Et+sKG3 8VԂXXbYxe=c*UIǁjdm8ܙY[Tg_ZX'8urϽJ楥u**+XZx3=O+4Rς+s݁N%Ɂ si"\Ap4 Tn4w֩Ǯh79; N~x*LgoN-i:Ǜ_&<0p:&q,DŽ Sw8تGyk'-s⎓}Ҫ0&Ns8 q:|F98z>D% irh[r{*9J ,<Tpip֤Ypj+U).WЖO.K'ÇR}ͫ|hA;S=2v?O _}Ow~v}sz!M!^Wid3P;o_:K %>?}͟r4~17hm-{xS^'b흼{»gsd8xg? ?70s~;{Q fpkͫ#B:l~{`|&>ƣxN({rG}5 _/薅y^5ssf'g/_gnjko?t/+1P?}p{m [/d+#l%N>rB%9Ϧt*z>f2;(Ơ|ӽ$>ćG߯7 z{\1o8JНm +e#H5(>l%$[rq;J t!lr3z^X fkc;/lF0Q\lh|g,4(B! L˜D` %oT;F+pt\uF-9Z!ቈ&Kή^-4Z@@s** .,vM!:͖J*`?OH>R;J81b5Q3^[P # ;$<d{!>&+*)֩ (JKqOB.}z>7%ΪJ2-G!ptazHT5Tc!^K)nÚHz7'b'SrIjsi"g1 6yC-_*  + tߺ<[_Z h4(w/)mLyX v-V"Q"ŝIHqbQ ۴g wB"=dH_(H?kB4XSu Rq&.Kh }rFAc̼cIF5`!ѿRE(Pla e(0i ҰM6}EՊ{Fu>%=V/O>bA\f O+ DUz0U !0!a%ɠ {L|Seͫ /\`*f=f~Tw#D1ZI|tx0up2LplБMW%sJٷ3hiU`ư'z$;J@6 .w(z+V<@(E&rZsȼb|bwf,Ѷ%f}fXH9rՎG⭰8o}I,pt5ڞwb@vTVW 1H_kS?}sTͼȓ,!Jq؂٧K6"ZwprI{i"! 5y]%J; k`)`1 E"KbCMfNu[i8obp#cSb0>q5jATLzP?PU@V$%br@wC+z6PFZ@'Q J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@y@9J肔@תb@ZI%leJRY)*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@'Uʀ*F V<BDߕ@%:A%gJJ T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*lR[[*SZ{ @'7C@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P JR}Zmhjf/Nriu^oO@CJq̻dz5% %(Fh9p PU(\K_5= Š kj Yu{UFz( )UN\%sUe)th9}Z+% [++%%xW}R!]"]i-+i زbJKC(-2Z3J"]E,Ulʉ]E V>ԞQtuteVDW0^ ]e #]$|}lX^mqVȊ :CBY-sww^o>N>vG7;H^e7g1;Ӂ M3ȹ.0'U{셲ְ?]eHWCWLSlAtU+x)tjw( zWHW\DWXQ^ ]eBW}@:FDWp>Pt%&DBW}*TI)i2(Į2\*K˾UFٷ|3g+e5 Lb*Uhm+@ɉF:AF J 3pE1UF+M*THWHWeAtY ]\J gIb Xb*õTUtep*Zqes2 f(Đ3S|k6z JM_/F㦢1M~n4/9^Or 4̧t.*c0Bi1yPbRا}` v>uUׇWFz! >ďCMŸ?3WW,:Lp ~[--%KW}'%],/oȡ@&Iڻʈ*iίαvGd:_@e7)|Cd+&j\eG3L:3f+Y^[0zճv>`9Z2YCYbݙfc6jI`U@Z*ÜL6﻾W#[#l#됛h[s+ᬪuň "j=zh`C #+PGZʾe]* )dp(2UFJ5B:DHA X]ep "tQRtut%DWhrtj7h;]e'HWjxAtїb*t(mVOCzRT(  UFzOWF:ET ^җ R*egB ] ]x$ s. WR*UtQm4g+ˠ7]`2\YwњOe{E*J5 g.>z`ŏjW7M#Z}`7k4ANWkvjzH+1g T T 2 *(P;~aO`U14٫VtFi 4́#\[ ]ZX* <UAtUt)tZw0^QN`7$*V2\]LZNW%HW'HWR0EAtU+D)tw'S'U+M)tњއ"83Xbr(h[zlz *[.߭2m5N_vVT!x|^[ֶZ~Wֲ lԜnK:2>$|OnK[1r6b`ǃn5ؒnh͑B P;3ZZع鵶-jX,mF뫘 ף7< ՉNZZkMWn.E{wA-#ףXZs%j^_)\,ck/Eb//ew%msyl[gSY5'f_?7x3NsWlμtLM*:߫ ƣR<4"9$tD !D&lpxh_l s~fj:T^3Q`Liտ/u+f&!<"GO=^􍛝G*.(ޯ^թDքZ:ӒWkG:>PϜU,X0bi 5 0kIP+ĸRtEΈ Q9R;g 3 ̤s&%m Y~~v08OgMOLC~KOHt6 k")dҩDDatlh]]]gQ?l#m)93!hVM*DjQ9T$4+=ĂA@W`VXqM]F?.ūכkegT3,j7;MFw3%UsIdI_]@W{7'Z梮^vӚO\˻w_%#@W=,4x5N0qez`lpJSw0ι8N?gRb,F۳ͣ_rwukg@Aze:Zs(o9%ezy.h1KIQ} %4R7AVyi1q԰d1h_ʦrtb*&P=dɵ=Jh'<(nf !%RD}e2HW3W) R&%,*SFݤ'f9<1ᔛ]23vʝytk/F'THBjM5%ʢTnךٝ%'oF&_th$0-#rSKBa΂&u %Z !8d4yQ$L$a=H]yo?C߅Ū6dbK }#s$Ðsf27ǙP`CfpDF1,hKs,6 >I3a8Vz=.r9| :n8?ےm;Xw3}8#3{kѲ-}f̶W Z;!_tU3ο:Z,~bH5WTNשĊq+֡D7ؾjbͽ)euwbkZŽ]oIrWJuwۀ$-\wɗAKL\WO!Ґ5Gls=5=Uտǯ06jD`9w>\1I9cֻC2,'+ٱ[zt:9εs1 d]&|l. 
휁9,7`Nyn__{՛{L%ey!Rs+*x[umV-T 8m9_FT-%?r n/&MH+URaFGNc: :ι3VSNa7^\״+ +F 1 IiKR E`Bxz7Y|%8Gߟ%2#~W;&njQscѡ tl߭15>% Ovm'u5Ӑ1KWzeFH>\Qr|UkQ(Ff3@]7xWxIs" 9ҿ뤡! Df Q9%{N3ίczUOyޚ *i攕9y̜Iy,[FFH[ʾb$xiEaXj'7*:6 kW/^\s$ՠDd ;/NՃgNgl/Ica݆ 2{,h80t'-@X DPE>+I`7Ɠ)[oP1r IeHd 1+ 'A"f}e)[{OL\>i>y4c an3ބeT5"mQYǞL,n>ڌl)Qhjo7̱AvSK.[{͑=v$S5yl6Xީ,*~~~|Ht0b$plxɿs%d,Þ T#Wkv7qwCŘvt3U>3 KCt3`wEKtcihv`1{͉,-0)EVJh90{LFcq»قM쬞Csaa=œy]ˆ_hqNl^E߿K!] ] Q!H ^.yR`d{*mQӊ[}ֱ!Z:Dk!;P<"II+d<Yb"ƨWfrRZ#2^+Jp`WU 'lܮ4ϫ".켮zŅv R?/p2*~%Z$N7]5A-"W&C Eb9 o^~5\wDKs{[R 'ATAHFҍ KI' ,9=>[f9dpY&noq(ga*zh:zzm' PrߢDm52~Ҡ y&G*o-d*MRH&D92`3Y(1hmPD`ݲFS~nza钰Xɥd#+ɀg7L3@##>e>ǐm[v,b@n{( I(aY>tv:ؒs9P^1$\)!JOE&xURO;.lANrSv6ݔ~y۱jwLc"pTzL6.TJ]ۨp( */rhmE(FU>YJZW8T}uS_X+tqQq:sWǯc}%? ߦE}wGF㯣Kd>{Z&xF&].F>f[8[; ZXjc݂S{R]R RppHEryկA+`^K2tUa(98mGi e( v=]p@f# !-{ɼpج!KVr8R-@h@ >qϓnx\-'|yKi̸wK+0L{Ldާv޶;i']*Wf5 Rj<*HKd>MZseڭXv%gank (S)h9=?!ZE xː5rC0tRWh~|IUgG NL2q$(ڨT"Cf&9> tGRT OJ椼 هdZh+Jc Gul"I7PZ&aVk!1TmIS L1Ok2bv"fAA'UWS|wHF3sʕmHHL6XT 3-aL2uH7ړA[Ի/V;pY=ѡbc(>Aҙ, +q5/~|!ߕ'n3- }xz<:;fvEcӏxܜU`ޅ$5e S ^Y!n~[“wK}Phes"ֺ=&ˡμ汋oE^/VLt>Z7,s},DX X^݌V7XLߙNG@7/?uo  2F׽0 nn&u2!<,?Wog}_}jtF&sřDUIWbJ`dqӤ_eڥޡ;!SDC2C0 Rp7dR9;nEUˬ{=)Cz︔c99r@6ڮ9G RE`E%5Uغ~Ą,Zx%ez ?x2\5ztI~pV3ox|h-'}q/z`dv67SYs{PorڤKP<[J+ \ݬ,5gF$ $+idqa=*OʁI"k c^8mVN&0K!<ȅ+>j#mI]v-k, zWy3ϥavNqvꊇZ#Fֽ ]@֟0\$W@,%AY&1\r135: ̣ѬUSUKϷ.LI6&"eTu [b iK- T`20$ur'4tfk<ѵ'xN{;:,wh~! ;@NuCU(*ɬP)â9D+s֧)dM!FFt \ QFi16l$G`>9+r2֪]@& ]mi7Ӵ>{+fU~o>2 DG1/1{ uܢiɗOm4Be*+iqg:cbΐ^+t˅`A'>Pj1cK&IzETQAj9c&QI(3H_\ 7n/>y즭jNW5I#۲#J64>~M6Ds<윆2YNWgrƏP6zĎOg7N7& hR;s(gQ'I}H*FK~.tg姘jJN z35mS厚^]mD`[ G%p}O.xH0,E=5]?l-_eDg;Lҩձv@X<~P3ZQLcX=?v GRjֆ%5#`c\8X8ԅ8 avᓺ^L[4{Y_z>;[vvZ!uM0{AgҦe1Ek\&# ck@ <5؈f6l>uƶ P8WoƹXBQYEi V9Ԉk#'CCgxl"zLV;Lmy& 䰖koдC/ Fun؋9]E7ًߥ, ٶQi,{At-OXԺNu#fSx1PP}(l E=nܼqՏW?.]Dt䪯/R切?;o ҁWtrϾFUѥQ !(*{!.lAGB,B6Lj-zR6Z:Am w;cJ4",3D\V=G|s/v8?0~wꓟ)ez|yx1Ժ֊nKiVEq7&ߝr*-.t]M<%9aNuhɥNEAIoZ.F!H}ѮцRldEx0/wQ6Mf;1ew}!Ż]nG~6O',~y::mKJp)TKŤBZsO.TLN68銁=bt%Ɨ+&w]1eP+#Q[ѕ.EWBkmJO&+W@+%,EWB] e&+BS8btŸZS)Z J(wf+++%Uf+ zuE, {*GWK;2FWgñBD}wY}mS6 .^,~YYq]vaT@ eN (\A/wzG+3RvDVc>}x`Qe?'pQq5ݏv,M4 =4 zPtbtŸ.Rt)esS&+ 2JpKѕВ]WBig]MQWW ] 03\SdPh+vueg}8決bࠨ] n9+}"וP{u5]Ye BAb`] .Rt%c=+ُrz묫ʩ3zGR1+O͓)JZed/`U1g>JWBk] &+>;[d=sgPp.EWB_/g]+ܳXi]W}2cF]jK(!κ:Y7ϢZ;A{hQnҨoקgMs_, no/17똔㿨b~[,/Gn"frgm`yiW!Z2zWG*TxZX֧ma_MۦKZWgݣvXU.Wf':-MLX{ynzKj'xW}KsQ~ڄ]h|X9߶3n ԺWjG6/GXrS"6V]%.@U}}ȏ3mUp{M @/+yJ>~[˦KMUjLҍ%ZF0s-?9V~ >\SWڶrQ9B]cF ݘu2"LϗmNҋpSkM6ԂL :@cT]$G/m#lo׋[./7?w oղxZlSIob|ܼ o._ "&UBWW䡩|P#:˭o_훙nwMo]&j \Ւ[c?܏UWZS8AH^vp+5.w] u5E]!x$S] .sPh X؏ì ݋u +Ō6hrg:j:2HJvޖsqҥJhrוP3jӄ.s]h -ӏ2̓)*R;(FWkJѕZrgʓE,it%KKJ+] %κUAtŸP(w] %͗ڿ]ўUO i7,rvOJ:!Ǹޠ[gU013J+@fϸ¸99C)85@zAY[o)zլvsҘ?d0j38H8&5~c/GkGJҏ2ǥ3_Z8w9j+5Pv(wϺ (CAb`0P3@)QYWpT+,FWm)Z^WB}ttE/?G6t%֗+ O%9V&+ ]/`T1ܱ{ LJϗڧ+>KAb'O'9ѕЎ=5&++&EJp+5_J72LA *UJpݏg)kU麲{V=湠 _j׮ẑ/ jU?Jڕ+;Gcvt-T818>5 M pb4ʹt)Zp5 lA`_ΨRp-EWL2EWCWbtriu%g]MPW.JW XdPp)EWL]Wݬ K /GW]1-ue@銁)g2(] m~te&u5]9TLAb`B[ѕΓ)\E+] Rt%A宫 u5A]4Ycu匮ѕ:YWSQPwUme_Bhύ`_Ծ]/c˳|(G7Ѭe_.N6;~zUj'uI{sQ7eBt$vhRFpl{>?.ߞŋ^/^G`]ovX6⣂8'=c\x"Cn3H\Ɵo^ڴ_MkHnonGN!4үNW:?]K-9+nS[ѽ8؀~MN_~xӵ:v:/[v}׆j۞vplh.onO9y6 Ϝ 0֛ )=FkEG>Ч^W֣Ϩha۫(! 
Cԃ4emW)Dt cvWNVl=G% ;i*'WJJ.%^pNfuQ&xNGAB)Sl5O2\zY)mnf]=5PqIC)Z~t%ϺxlPVv] wiIaJmf]MPWh(JW.fyRJg]MPW֠ѡ ] bFWB볿v%a^6I]o4+=g?\[Jh ֏Ӭ)*N;/^jSٹ6m[vzvyrv rqY /mN697꟯h/nFJZW{7BW[JmiMEowO}vluݜBoo?W\~l._sj^^qcݐz wh "R5f_eݞ?wc;zﯿ;2OUG]+>Yowqsw.^sOY݊{q]EMMm>ݛo?'ΪiFeo,3:FErGs}aypX|g/T  =m~/ޭzbz޽X;:Vc z]'MH5|[#]chֶ5当=_N~x"'bٶhƼz|z-?m]^U1~K֭l |8&PnUZ&B: ΩhpiU>؇5 uhɥNEn*pC!:Ǎokj;hhC)\VIRS ^RZ&\A% \\:`Lc]AM6dT3 uQZ+G:Ylx$e`:)Յ}?{Ǒ]ʀD-A;Y ZaS"Dh,9gH-6-K!R==n{Nw{t[YLf cWId38fEE^3"Z䔄;:nhd6ѨXDed%Uz6lړZ#TEv̹c03.M%Bȥa1N{'h&"0J)=B2萅bnuQD@~ߴwGMT*V81#3P2d]Ob|jŻdfU^܍S5ԺVDC))IV:ѝ(sRB#B11̉%Jѻ)D$6)hK se_'R:a@zi*H9h V\j()AxK dֶڔ^* QT3,tRZ.E#OAQ ׄOQdO])i f;dpN+@]+(IHkCuYviF*,d.Zn@kBjcnu+y*XLđΊĸеYec]Q?s%pYP0THS4Eiޜu0*X*ȀJvTZ2ɷV LI2, Pl'Eb>n!]2U VjVa "3* ȄW'w `< ABA\J}2ɮ3o%e:|=@cYb22yn v&;.jPAޠB+~X0n`Ɛ)u0siM Ed&X@E"3| ݚ"CO&XV9W"wP"(Qw`ҧthC*Kt]/shΞuS!vo顔J r[O րhg]+u @9.Az-r blw*XC-D;+g bro鎤gcyp ( gnʀGk?,B  35)ٕ@?A j@!oqVUhV!X(fa!dE De#@ DUb|؀?vCڳF& h(xn!mVӥGoU4^P^`A-AU6P*t `?5JK"0AlmRiC6<8h|"nBphhӞs)di륪` ֣;V$L'Kic+q0M DeVۆnMEѳT$$Qzh횠@Ƥ!? rV{R@p0)!A/Q[tCۚbCT2'YB 0 JĒ#iV̯;^#@.zhujEr7)DL0r(:flvZ+xI4 ]Xz%a $Ti`/OH]gF@ޙ"B0@,/r-n^NKǵa: F2@[Gӷ?u~MEt#3A*0@K~{~im>.~vU)g _3Pbݤ?ږ_.~Л.s~ŋ)TyAN WgHqBW:=:پv|?nՊx}h\9ȏж~}::DǶ.O@90(R% J,wi qQa39(Z)ʠ @ ; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@O r _=ڝ>k (#'Гt6N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'u9B^3N %z'=.PzNpb'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v=]'P ^I;N 8B@O DB@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; t@.֯ر^qIKM~Z=Y_& 5bK;c\pTcأKm_q Q: a\Уd\Q \Q \Q_?\!J%? \;}x.u|q_K{puh2pu(]*Ճ^;.~KpyJNiyqݡ٤@KXo-^^{`Ћoo-WVgs$ qϵn!ZC0-e LSZ LS|0MQ2L?IVJJvh2;W.ʎEኢt •V$W7ȝኢܒ WOr:+lpBV(Jp$*: pyxŻt|BiǀhwO8nUOY=;ˆ17_B*Lj RsȝX;`sx+=]A]1?,GM/y˿,.oV}w./@4 D:.D) krph)T}4E1=<|T|x^ciZqYZ5ޠco9@n=l =K w dSIG|~z.{[Qx}=q:_, ۬k_{tx?p}N>=].{<;k[F7 \zZI<~kqLX_~ ,ܓ=]_Vo/?Ʒ! @h9?߼]7o$?ײz7ׯ0xeŭ}:'385ZOiKu)0k(RU 7W9V=^\wVkp844.3Vx_Ljs)9tNgqDSϖK$?T|::{>{;ҁ`PoF0PSkCˑG]q^Onr R)GW0] Éz2rc0 ~pڂm D7ofGVO UXb=-17j>8Yse;?4o fcz16jc{3$)"8tE7Pn  ~zȦw=OO_/Ζc-ۤ'積/?gKW;60)7QKochE[rqƪ1)ʦs62tVuMQ2}ڏ߲Лѵuۺщ|TGm1~3 \}T2?&Tip.֍CoߢfɌA쯧 y?p jַxfD)=Z=\9Q^rႆZ ý5U3*095Y8XP6%0\a,ZUO-nb0z/1D H-8'zZ5[CP"f~¤wh9'8ȡd:4cemU_,ȳyI; /KDAέ(Qpɼ'wEA9vf9Tēw W3goYoNym^r-+zfy"f g/fcag~8<x$:$Ylv06BZENHGp<ΎGgvwP7[VW62!& Lfg@=&!vlD( ta{CJdFq2_]AO>[FN#tfMwc}Itx8dcxVx&k1t\F@S2xTLrcJ_^v>0*! …rLaJZ0F&L ^b:Ů#}GMG,->/3Bb)"jFbw3˚l˔t*;cOy0(qĄ猖9%iE}٤nդfkr]m˭m?|- my%]~8kZ4YVGXMX#;JQ$flSyݫHCیȇyӕ#IgΌ]-,hRf7ywL/iG6 ;^̌@őMẌ́-[F=m޽xO;A lR ?5q8Wk =5qλl٢YD!ڗfǍV~yö'.Qc9tgSzQa~1- ~` &R!n_9w:B ݜcBw$('/bUVZv*Rks L&w9j!2UoU )u)):@iu{~qkz9w3q;gL)`V֟)*Vzɘy#R7b06{)So[퓽D HĿ"2$|4+oQF2U 0鬡DVzΊ3+K0QPFN*W&!&1 eCX5/jk^W3g?*1萬Bh|flPg3"Z{r{CeP|4Nc2,ieBଂ` <|{n5=;Yjx.s%ʙyXN3( E q"c١eڃ &3T!PT(0٤R>E+Y5sg_LfTɟɌ:Eg# `>:"t*1yFO0:jq0PFCbBYuGS9Q'K*ʳG, NPZՠV_ #31J8j}#Gy%"TVJ o+% 뵉zG}SIc͈mԔ"/гsqj]Pη[y{8A=Az \&n %_0sQBh`{s5kxrs.QwqzV.?,16D4%k^*J $J*!O69Q EN6Y-%Dm?RO[YpRk?|/)~*t/ja8K #l.YLƊt?'x>6ѹmnBI1%_+ 7:) %6,s},:_+ O:;7h`Ws=uހ?$5>PoȺ6u2<,Q?O/(`q1$ݧ0[)F5"+Ȃ5A4&6* 74Yl ƀɂpxKcS6O}[zFq2=d׽)xS<ȱ|Ρ1VR,񲟥OZ/FŊG!\|}W6k5" }]_viPh"scj%q~{pt}ŷ먃۷IgOQgv=OtbuAӨuFf^VYu` -{P31@S7ߢ*R ^ m3k7Y)D{6p֎UGY=ڋvwt]// tm=:2!-:2up2/=lָKz/iڈˑNV` bC2U`~!3C zëSѳ >d5"_e.JhdY~0ʬS|h뵬;Rr@.u),e)дE%Ae,pϭ1&d^?P\[D9s'G7%g#|or5h5g C@͘8b"UͲ+)_36,YuS)ՙT|g|vH'mQe&bL<@JedhC|&uE,giԅXG ? 
C,ǔazŸ$2f~H.ge({ž: |璚ZKsBG N !yQ 4PI!+3X|XRLWBSSd5OR$ѠIYC2"BeP"P?0!j/AtJUB+F2sR.po@oLC7`tv.[Hevʽt^VW*p`ĎY@ C(9`̘.l䓊YX u_UfwLӁ'Jyie#4R~Gd"7dqWB #FsEK-^/(jDȄ!4\yZ֑3:p LYAD&CeISw;%|؜R-ifٚ\˘S[Tmh}*htأ#P/rkLѥ55G2`co1"F(-ֹeW~X7D뿛וZoß7mHkѯǨ@h!H C2FmQ' 9T`ƹfn_Rٹy_}+;Z.LD-C~"RK4G Ja,-&\F!D@ؒ㣒>ҒdFNϢ Go<+s Feș `vxPsQTgHg0@fLƍ$JHW!,Aq,Jh@' 2F 11r Z؜w]ɇU]LJ^-yCD:aCc3f2P%˟ u6?HC~ Ǜ^&wg:2u?տrsN "*`W%ݠr.ْL6nEО&AϬI-:c]93o>,1QMG"ºX ' (ڢok:`m*awK'fK/L DE3e9Y[Us0 nL=y$ܩB"_JSb{&!O>AޯuTe&D 3i%/!*:F윌b/1*tkKϫ׵!47 Ŕ "YɟH~ɳߚbRBy8J=F'j%7'HT.~!B~O׾v:?#6nZ~)9ջ Ғ@Fv׷%>?{ Ǜ|L]1ۦgxCzŖ6>E}*dHPke/wЧW9Lv fGAoteEWVsmm/=:O1#J>cy,>ęzWYC냷w4$!ڀ1L_7e(VZr"5Hq(fADMܤ c0$dRE2!'Q W޸M5&nDzh=dԦƴP\ pŭheMN+<%Ih .s gG'SЙe\D';\<.snJ;C0`xEPaۂ3E?+zw RR2w@H C ULW} ++N3(@U=艝*R"38b(/XXt[rF a̬"cBFzY&:+6ҿjAWadz3~s$# h|EΓNHfdߙif]Nۜu9$!%$)ۨ,Tu,BZ4Bٌ\! d#)ݰ7&n#^y!E7b+27Ň_#Go!1oP-Σ&mo:/vugw=jwƍU=!mR´TYcDx,zC4;ﴫՒ3Т3 h U6\"FV(=VVL}FdϽbbI҂!=RB肰.E_*cJxƘ~V4mL6]UnSV;?-kK͉IU6}t}W81!0|t<#WZs) 8\L"ҹHd;) :Υ **J})pUբpU+L(c{ >*W?3yMLrv gƉ &z?1?~Co<.4>@=JEH%~)?[(ս7qYS 3j]?wۘ OsND˖_#*#=b(muw/1sι=`@wXnxn?&_ﱮT]|2??9a4P%\a`*sYBlɼ;F}jh.pg䟡BĀtmKQKޓ&^`ZbN&c]yikZ-G ;<_kr5GٜPܚ- MxfV !|Ϊʺr Q5u3{cz4IP2B#p<[1P#S< } ϻ2i ;ˤЮm-:@UU;/VcHitWo*+/H`U_ \iO}*v<\)UWolb~1`VU;zpA "W$.RH{Hہ+˴:m^7rIQKz]IG㼯fO$|^m8dkDy!ʹ*.L 4^DbiwG5?_F_Fw{!?fֱ!7ud6KoKJ2Qt0 -uXQinx2 jOd})s5ꟿϊD1p$z_ӛ^yS=ise5;UmK.eP|K_S'#]ZM4?+J҆+z&Wa͖iB5 TtN.Ѫ-WJl2N+ >E"mp&6#Ʌs5 TnKn#]$Z$0m)ָ b#f;2:0kf*|R!D|}s:zPɁ򝶞 oQ=P=f9\ۃJ1t`xdh,&d1"Qah S>i\J95=8؀g27Or,6]Z1qhښ*K\9mX-!ƺFIIN,]Q~W"S}՗] Ӳ6Eg9>:!OO &&1h),$ :*KK1}/E),tMe7,T \ĵ"ԆhmeFhX"GX+)u٩ŢDmgkƺK1.9yef bbe“3@XcQIiIF#}gQ5ik-M{S`  @&ʐ3>[+rAdJ9JutC m&m2&|xJiN:JOʢT۸2~nП|h}Uy)$%7RbY'lVblQx&Ld߈?IQ[5u#Mv[wAѡ[-/yxIΙwfO$}B;άj[sfu,?6QHq1%37x6yo,F3ep)j#6&xoKLtѹȭ.3.z ƨhb2tEz5ON d TIߤ[[Z_W뱋BBr!%2"aMΣx>6&~}$2㇓7,pG` PNM&Db>꜌zJ 5DD!'ܸ,[#HJQڻ/nUSBBZ?Gٚ3`OH/%뼻K_(%FT~ӛS&>k)ɘQ;w47H[Ug7[{uoT !Ks溸? UɐLva+n˨_ ۠O?ٻ8vW<%z",v%كd,HOq.3#6tUb]1>*=ًog˳/s619;vPa5[>`ۓwzgnW[[y1OsߚWw 44y{wefjo}S7FZb^9/#)^~Z@KM?i{5O6oJ87S@5J!&^ȠIQ*t*d2#/̇Oa@VSLMi@'t~~l~-v8H-bYfV fa.Y}Lia^"Lf6M 쐼Y [XFǥ*>tmŌ6(Xnc_6NV{Bf|q&- 5Sg5䈉rE%tX@r6 R0=dx*9X@aȂ ES8]u%$!TR]}Ja7qvZΨ6x,"Nq[k%PѺbX]ˡm377dGд)oZ@)Qqh ` 6ivF:RIk&5[k("vgEvZUgu1:͒}ld'Yh L\vtu% JYmʔl-!0^c_{:=3\`:fmw-n ~`@яw ?ʲ(>Qse^¾yH&dy^>!}i"2BJ0b1lT5Nƥ` ZJʸY/+ӣkE=3J.^~: ;#Tbw =m9= RJt`ȥEZW_QEYySGjPBLŮJ ٸB" bĠ#4gVhg3XT-"wgߠUM{ܴcSX|ݓvϾϾVݟ:vz,x]_:,4XVlmkf܁<՘dT!!W ;I+*8VTp4x$6 ! ePޡ:F)j5Ygbʂa+bkg##8F d!:g/w^ lRoZcSș-Y.?NvLdZ/ۃ>|%Zb>R><귿ೲESA ra*4J{W&"_=#, :-SFKPkqD 4,\]-yXJ\]\b. [nBoOm?f_j4j򞊋d Bb)%5 l쫍:)o]wBVwxor>2bEK^3o'~Ja7b٬RG!،m>'ףy劭Pymk⋵,7a":{ cQrt ؘ|;>=2[eXG@\W͗>׽\jKc )ʑ! NsΆFΚ7L@Gbc7-[bjʲ|N +E[i-Zg% T;9my '߽{w߳Oo&c}5Fg#6I4OϨX#9>sBw!^k x5`;ksсeDv|sOf'fJoSe[=Ml~z Juz +v2p,\usA` 1nYip21Li/i]Iks0 sP}hPlJ(Ȥѿ4yR/ξxxۋVw} m^\hH_O<)~/XbΤ@)%Cru[Or,TW\]Hs)\ R5B/"eE 'EBջq7qvk8xZr5=fkxm8Xj#aj*ѶxGti֥/-RAEN@:hM<@)ޅ#:T840KEY-؞ /0MW'=B vxEGQY,ډɎo!4] qSe=>CL=TCTg-k+nqq~VmNӌd:EU|52یlia<;ȘrZHVᗓh߬N˜;rմ{mmcş_r_fx#InF YR'H)Z;5ؐ4! 
i=߱q=ތ˩\_\}_Y'rjZ(g c1"?y:ZdݱzCnqrx푓t͏64ѷZAv)c#iB[`{Z8iPADK'=,md,<ʬ?^\O.cC챀?&5NGTϨuIʩ|.O: uCo,FOWi~o";9Jj 8\aeoZ7 _fg˴0`N,ŠnnhEL7]NwyzlM2jNRu|wwɎ>_.Ju9%:£Uo8zK[Ҕ7:y-y+L0v`XV{nT\Ly`*qyCV& ɇ*Ud2p{4ǜ_|/xT%x{wg j !iVX[70ZI'5)l9= M DvuZBTr1m/{GVd˃bB [Ume.agM'&Xӣa7ve/XoAE/WY^wzxE}LJon~R[^'O9ƕ'WZarmpSLy挴9(BRlJ Q U&(%D!0Q-D]g&Ώ* 2)1)K@@SB9Rd\֐xI^9SR+kIfp&e,P1T{9n@9l5!caiJhNڧ#עmb6 rZ |XR_^ܫGnnd/7XMy3vz8ꎻ-1J94&9Y-P&Vˈa@tyHdJ-V=N%_DsQ"3sv9BpZauDc%TK(Pq Ud:.Tcj5T=Ș7yMثcC EAeh \rsʭ>Қm>$oJr=qǍHBV3 xk ֞ FjB!7x4ś&K]YUq|sg5X4H΃ɓ6&W1 F:e ˧,Y>$ +ąQLrALe' \[ Pϵae3Ldɇ\ѲBB2K@jtP1I-bqI1ĜYZmѪPhuVgbʔ}IcG#rk:F{uKg "VJkq,$#MgNښmT"/Z_c,rox8I堅1 ͸ %_0sB6’3#Bģ9X*_E:cC6,%AŰ7A4,go*J $J"!K6PƓMÐ>q?L-Xpj.|(o)edRqKiǣ>cSܙrgD}0dP2weJe61(d$gmԬ2hwF(yh_Y{BOWsAObE/iػq}!S z憐\9ߴuq82܀ z>nObQqf <~4\bKZ]֘5eJ;Hu1H%75_肫 ;Ѥ::453^{5K-C޽* })tҧikvv>"Kz?Ik3~_~iԢ:Ty̎[30B뜵%RQ @⧐VeOGXoTB)B(u)T)дK2k)U,!s .n9s'jH΍S:yO?D p÷J;geRhlIU5͆ u@𙁷KURSt%dڻdMEc]B\Q uX-]dfi&qN H:PE`"]MZ2͞XI`acXAQZX)!2>gHGT84 #g7 i1L+_xKMc #'NLPd "9J J"C%LÕ/AU"&IɜW1drBP`+bP;$&|CBP‡Yp,*CbN0`)`K̓+Mv.[*qުt]T-W˘t$`8>pqVbT1\Bg$! I&5X ֶkVn u .>N B/CR(7Ʃ$eCd ]TU2x)zʦ(<צ(<(<ۦ(rv!0 wAyE6|a3mm22/'Nb)-FVmU]Ѧ(k>m,GsϼYo\?Hk=ug!_> 4:: mQ̅iMQ!Wb[ FwD/jcnm"rhMcVkWWJ䝺Qԕb-" )JrW jJ(Y;o{?IfHp``K[&zbޯuo2 5 >=^1cH 썰ջwi|qTyJqQ5} j@*Q]Vl5=۩cL9EldkU![ 짫+sݩxi&tUVAW\ڢ ]]*v+N=NUWDuU}֘IP+B$2E6[ 5P>vUzJLH]k*Į ]]C^"helA"u{U!ȶBztETZzʠe Z =`!+CJ Dui*?V[SꪐkZ jUvUJ)\F]rE]j8]]vQWC淊_k6?-21[ހ6Rkrc6@zz9(ٽ7uG,Q7 B-̗hN=i =sf׺t**ƼM?c׼ǂ楼siAE_˻zu 3ilV\?x*xg YX!&BBe"pixSM$cЍ/wOb^d~^O> -TfRU9Hep*6Gے}UN*̕ V^yKܬ{U꺾u]ߺo]׷[u}kU7u};GotjNIxCH1dRO֣g70?' iANY`J;Hu!H%zvzi=G8j-#-j/j~^PN:tI@תօE!.@?Z{pep@0tY.=0??+qZ4`nQN9 ;+ov)sgK?R!8\jʹۙO!#(XMJgHGT84 #g7 i1Ѡ(ثvĮқBI?L#-l}C*'W#MVQr(;1A@ep$(2TJ*b q43QYpNR)MM#< CpqVbT1\Bg$! I&5$4V'~hYyP%8Ol(o@)oF=J+og3q+N<-/^ɱn nB[%%ԅ/3x8pcoo?4+%\*rsIWdbJ`dye\^Q@S hEKCP!lҒ/4pE0J'PpRP Ed@ 6JhDĤR*sdlJK>)b,"Gp;+U^u3#x:%&BLѹȭ<h5 D\x#oLHi뒎׳L5{b[}F{w>3I ѸOFMģf: U&i`obm`Z:)\-@$D+dr!%ZZPP*'Q~hF`Ə}oYf zn N ł}^u'b~!{JSI/8bʎ4=?}f)5$,/({sr}V+{%i !B>د;C N#(wwL ')e!}> Y3;"|H׺$r/6WƮ}_wr.h=ϛ~ޥOEvI7䆾KW'7[IHtkltyG$qOcۜC+]Ͽf?>p1r}kܺc}0xI{aڪ V!r[J.> >eS@uc|-z/v`rrz\k垫Y* e"mp|t~WienqY΅ ]ovȢdhφ* 7 IU9VIt!fd>A*5B*xW)a9$Є1 &C@91+AP+ Zꇥ6D'Itv͆eyDwOFoȄӓ &Vjy$@ %ٻڞH$W|"񰠪G&g1!,Hm.l\-;[}0)f@D[b𭖚89c`ܷYhFKqngݨ(jd¤+MF$U1Y&N\Ub 6OE w+ Ρurp dBFvNΥTc&GzGzPMŜ ߹3)M9_iه bqN\.w8%<*'gzng:[Z;3sfG?l;x.'biᶢ D,&K.c&+[8C%tF)2G.*TmMh*P6\ull!WyIIЯ?+FzBwF Zg i7ᩎWy`Lf@2zURT65s+ءV c8hkX1$nCQb(w'ʙb0ܤFds/O| ~#[sOs|hm]v D$|!}|bT'/䖓).?khk 4rb̚q|uƝP:?ԥIKj58oZ'*(k%6yJ@4ɐl!Crpd #exqp|zc#$(5\Y\% `aȶhT$\6X *P)-Y|* m| B[hU a'XDjbn^QbRϷ7vBuգη)C]U{uE凋 dO(ŽCkAʾl&Ɩ,ɷɟhL4+2VYĊPـ}u3 I HFnl0W醅VƶXh:cIƌ6i2~ #vhxjvU-LY m+`Ζ\6&qc-({je|nir fX=KlB։q!VŇޏ:z0bYb jw[ۢ6vFmP{bMZzEkg5䈩Tۈ?q ]j,X 9k0+|y*9X@J] yJ8M:Y F IM[U A댇yA}%q~T b-"Έh'DޮBXk4ַqd[jzyzStE@)Qbch `npF1xC5zZMһh9#C[k]uv[%n ,cLLY;\I’lvg/(A[. ,ϊ}Ip)pqg"^c[< pK'"1{n~|G*wUR[[CHrz^fJuq>,>Z;?+N~fGo3͓ө㧟[sz>{^?kY9]_,,sʈe[2ꅟ- sAǢe}?充毼hȟI0SZ@wYx8~rccy.˟:ؓj YCWO˗~Ab'zЮ`}-qX>s|6_ӏ{J@Im --ۓո6Rp|=_?^7,?*h(ѫse'GݶA<1JVLuO$bownvNr+Tuv'궳J.$-&"ID:d\ ֘ {DE%euM< CZCg%ؿ:?ڬk>?ϗ RJ|`ȥU&O(n߬#s CdbW'l\=PrF[q$MZ|$`j1N͜uĘs_37)P ]LS]Wﺕ1"϶Y2^(զgUBmtvKAHZ0a]RZB,wg'}gwyss jFs FrtӳUpj^7 mT=;PA/*.xTK֡Bc1SQׄY̘ķ(/AO`. 
k.*1\S}8sp[n7 JLde5}HاV+YFe""%j-0yn7n{3<>&(91gcpbH AUKx R{ U۩'.*L/|4:QWB@8\&(~͜ lToYh/jU6 sIV,IHɑBB!ʚPވ(t&oez~/@͕UI%bmg* F'Mq- ]UlRE`Ȫ)m`\̎mּ"*zjb!LSe.S72a\gu f >֜(xd6HDSY.olR/BWIG'|"O/#"Ay蝑Du>& = e%58rSڇdY |\gcsL,ނi#5'qEvK8 ;w{L~79dX20h(낄䅊M&wޔ;_.{PBl!(dlt.(:5IQrMeV3^_ֿ@B53VE8e޺Ġ8b tmǙ6c<+c: zEĂod\Pue80Bzsu(&UnL O5RRpLl@ggO6ZR49o]72kkH"!WN5oZ,:YiLV%C 8OCclsX&e Su#hP{> Zi*|%W}98<[eqG/ٷ)J㥛egPK5 [jj_rTL+O3 'bSFΚ7\Ge'Hw%{#n::#-gkI/4ۻږEJhW%}-۽\*0e6'BT[kMw]o{Spp8 \) 0Xoq77䬴p;ɪFgh}\BsUis0 sWGb:}K5"'+6-,jlEǐ5\1&VTy8'789[[E޾4q>&ۊm;Ywy|$ 7 ̒^'1_}!"Нr([*Y?|5l_=@*q: 3 "ɀ+ ]~IN4vJcM*.KkZtLJ" &Ԩ1ؠ](iaBWzЦNyv~t2?{cՇ+qB7wR>_.H' 4>q،I: SK"B[V, )%Xj!غc%r"1A"I?[R27%-N~C Un߷Ylze~Sm8`r66.\m*BLM &ps1 T,$"< "kR0'`ѕ ȚζTt prȣ᠌L((I?5iv'3#nKS.Rt#= ;OHV_چu=38O95րv8rh!%)@ N)xD6b4'<uQ虐!OlbbI\22S9kD%7kǛׄOYm }i;A׀c{}מQx 1@]ID=+-$f%(%L^&'q֛bg7=C#CMwdw6PlėzxsAw+[qB ٣樀B,#lhRقL1YOʊ[6M9 Gǟ~+5Pr ߭mNo4ENN/Su-zx8\+i*IiM7ONfs;"??KͰ%n#>n죬XniZm4)q3#Aӆߎcw˩\?{gȍ"bK${{  7MI|j=yԒlӝLYX|]OilYܬZ} ?_\N @ jv[v.MTOF^.l_j?iA r9)ʹ[JfBkJPj#rP_|^Fo/C䡲e-|OKw8OyﳎÑ'N'ջbZ#Փuϯi_}*Y_.ٺi?M7V JpZw0n[xXSNgִ>aˋwV'\:6̇<_MH%U+J.NWyo0'yW%7;_7E@=~ζsܠKkO"8q]/E\<ó^a2ُ\Քy\%]^zԢP"H~n mAC<(ayb׷~HhGXE#ҤqQUWKicToWZQii%@,Ie>m앲IClr9ib~7ΒRgK8:^VttY*8g?K4plke@r΂#Tݥ4iG(ѕTBu\ךСN'auPEcr0R()p^47G0Ly4+Lt Sn1pfx5hЄ $SR_El_%\r*.WI%a 2 6 Hչ rTrpC\i$+ HVqE*[\W i]FB>^9"H.\pEjHWRW=ĕe]`iUԚqE*pC\9nTk l"6P;+R :u\J5 \^3|pz\uN]u ınjD3T23nz[/یzneEov FlXm^c㬾 0cs{W 24 ~cQ: Q69ҤR=&`!c* HaTZpC\pEe>c`56u\J\W֐PypErW&C #9|$W\pEj;4p8H܀JYM$WlVqE*# Xr]` \`ԚqE*pC\YɜJ"P;'vEjU"!cXFB]\nsqE*倫Wc9'Huu,/q39\Ik.PoMk꺩(k}4PN`WcJꊷp@p\J;xW}(id+l%W$W\pEj OWp\IeYV  HJ޻"Z!f/{סBt W$׸\pjIuS)؀Jkx=G !ɕĮPT!G\^vP\`U6"M*:Huz+9". rr:HRz+gKz<\`YwErWW+Ti]}=;6{:qE*pC\g W(W$7 ։q*%2Wq&r xW(:[g7"AR zW+cӫso$%6>nrJvSOOZ$p:J ڻѭnhb;f՘۱s|jNvIu gE&`C.lduXњO&OVXMnNNT .`ݚSL5ԩUkC'3#(@eZ >5Z-S-JBC 39EBMlpErOZmSt0ઇ'd+X>7$Wd+R4nUq%iO`+mNǻ"RpK\) bW( HVcW`qL!AK7d+ Z`"u\J.\WFh2#\ԕW$W\pEjӟ$`j3 6J HJ~jUqЭ"#\` H&NjmBmpuM$q6m7 :ɕNM-છJTWzM/&|Rfݘ2-4vVwt6Fͺ=U̮JkLBJ2#\`dz&s:HTz+F$px'9 QJ:Hej\b]FG .\ZwE*`/q8`P)T>5Į͇qE*A !ҖیpEW$ך\pj]"b WFseUFB Z5*\ZTa0G\Yi0-":\ +Ri`qS2 6&\\ jЩT—rǕٱ -5U=:2MNsxa&L&H],ؐ((%]Sw͙QΒNNžK3\h+`l V&Y!l 7mM3||?[7}[(ymi}R\J C UgPOGMSoɵ'vR컛DfM"6`؛r v<\\*:HW=0W(X|pErOZH޻"R !$sAFBJlpEru6":뮓J\W adFB/l;Hn>&u\J0+fォ[jv\/cٜ?l嫳~v/na~q?20)+VanRʤ`,l0:pQ{fu]s|v3n4 Itz3럾oƫZrɜÊ^|Zu? !2>ۥh,;f^e3' OṼ -%\+%'Xw Vl7ּD7׳ڳЦ~zhvebAZߛbU1_\(Y'{y%B !A2x5j͕qW{̐~.MA;]J+V%g3`D+Ɓqt !FkΒYV(U!vY?mT׷sMUu즌 ե+5){oz;sa,0Wјǥm[{O^) HJ*iFX.͜oj६?.լ:=:m)>UOcT2R4*a?P)t 53Q >eFr#αWJ&ޖ*r52XG+G徲zJ:=E7n*0<*YGU,X+^UA[P lLiBOAg޲PPCŻqp:-*ȼ2}Ź(0Ikk%$*pK؁Rwj CKI GG蜹9gR^1|-@m}f},>6CWז#|=܉s9)ޕ_ծY}s N×M4f қ/?1q' 'z)Mv6avFtLgs*hq=oko~W̍ۗF؜%jc]\>+aSbԄ?اrp3}Sic׍F Lm\U(vQ!ƫ(e]8om!%SE/J-*b{4?qu|Yφ;v=1ĴcΝb^}tP,}AdɪFp酯TJƱ#I_eedF^Y~bge #Iz7xT)uݲU8"0@r#!{d H>lb 4r-xkb&f4?&QjGV'bFV>m24k_WMfiڿO.?~O) mQi tK`kV"-J[>eBX-ڍ Ji3nrBq+< .k;tttNhAltH$ ي]̟<Q,>o{wv3^J8z_Uop[FX7r8]4ۆ{O-LYgX0Vy5.-oJ;AlO(t[ZW'RdϤ}~6=Vѹ ^piW`W_Ul&l6B(|X~r$1pD+냤)@ң`֒VBWK58Ӕv\ WNs^M^x!_MpV^|KR./ W\g׻zߨb6 ]v5N[/w؛\ RTA3à 3<&dZ,=^URv7/S|<[^'Α/D"Hl`Jx CaYDfXuIxAu LϮ{0ۏl*@ZPQXXKZH"O$P2=UgE! l+,g[9c9yee^XM .^p lpᣒ>IF#}ga˫x W`  @&ʐ3>[+rAVDvSOP)VYN H{3B&L I(9+Nvˮ71R$:xk!d*-Gx'L/-L3SJϗKL/Po:xEEdPIȔb U[00Ą*YAFO5cS͋Z۔޳qó%jFۍ^vXw;gߑHQ}g>x/?T)TD6%p[IR&4SY .h YEK !ms.' RP0Aȉ\0<ۈX#AXL 3ހM)zcp[@$29YȗSsU#V#yLtѹȭ.3".z ƨhb25mp n#vݑa?#}TLBy4)%;Ӷfn:N@u+:6m[S> 17(15#6.ܸDoChJ糨|[hN`Ə}oXK7a5{o b>B6X {=T*f Nv#z+#[/D_@YRz I *Z"}8JmM€i?/R(L@QfLY1,yrF$0s':?x/P99DV=Zreά0J5COQ*z5x&y|tzW`e\30Z(?#)/qJBȟ%Oۢ1YdtAf|cmʶK;?\w[ϰࣃgF'GuqzܠF5% 7Xc 3%ɹ=j@u̙:Σh:yMEydH+䠴 sc*(N0}THSݝضEe_ !~ܾC!c&Ȼ "SGHgZɐAlbD-gVy(* If#w 巘iГh.0<5"kL=95}] ~~y u:uT:. ^rt_v`Ul?YeS6!+. 
/ɮI]>V nzBc#PjB[A {ƴ_JDXM%z4HyOGmٶe ]Ӓ ۽v@X3Sm$`1S7#&A5g?IMao)B.})>A\0znSݞVQ%ڰ˜$"OֿS PxO/OGgn%ԂQBy;=OֳSO$clzcL)-%F]=K728lW0þQ6/ -uV B=+\l>wX)زnZЮ ((8q(ĐvUy0kW"/Z&,ԱO¿}'o_,/.?.}Yv;u.6`m%vR>Oye Ȕ؟/9PKMȹeh};a=ڐd[ڨxafrv}݊8ؤj$tUiՁAUˢF ͧW9`YE/O @N9[vhw;Qoٟ 84Ϋ" :KwgI >=o{JlwBpݛ׷>!1l:JPwŊhI!ێx go} Ўdq-&cGv$mϹ|0K&ϋ`ڨ }bJO6)rJ395$5X|OY43wEUumQ7CT8aLz]MwHtqSPJ$ 36nn"Szmi̘rsjɲPLbhvZ: !P_&KC5ui΀nI\vT5Ǘ] QlvYH# k8in=4n,(r=KW%%m iF͜=;{}W2!3Ci%h7)t--%V/{(7g\S!B!Y[;N.̋ojp?,F.PㆨaA}6-gQ1A+,bZ#+g LH+zka$[gzG+`B˱qb2N]#u1 kAN?X.AEO7:z@m"tϩ}Aнj'poVнk A { W(7}keZ?:pWw=nw /zIPhѠiA!rM;w?Rڴ-ٔHQZCd#Ùጳ)9[on!O\ >jk n[ffT·%'X?7xjsH9vdQ?39 K qRS:.PcD<↚@DQNoPsPAqE\]WX~0LC=_;RYX;qU/z+5S߰v\'!˿vjxsJ#PENJdc/Ih ပ;-+ $oM/₪BMʿn&9A}e]gXP$loGM~I7E,֝ڍ/[#!|?b11C+`BpӌKyzߴ1FD-elDpӎ)O}K<y-[3iv&mpF̒dF̑!֮:]}NWN`3M;?;F_ͫWo{3z=OP@Lklm#( OH$:0^,P7RIqE1Btq2sfEJ!R"(S[c;潟p8ԙ2xY%!vW!ͫ9ajP*a+,/HhEcyY!K?FFb+l$qBunAfp@։ӉTKD#d6n `ƕCF0lZ&hZ#FMh]IdpcEr))lTAP-6AyAܾݵZ]$n<-PU׍ܮp_?(uG9KGD}Q_(E[^8Eca+V_MN0%)ܩ < amǃՅBtc$t!bA kyx7vO{?$PA`iOnbߺ z߶ԄTyyy +]aX>Σ*6 o_eW~T֛Ci\.r˃Sr uƚ2!1&\f ";Ogs@E X"ʼnxԊbr(yIc]eoo_J{ D6J;rOroHQ}9t-ye9qMK¼PdZX9t5~7Vs+mrKNG4y)s)d,4Ox cYTuhՐ-ޘUCnܤQ\p!?[S(IA#ul% L4KQL 2hy{Z-Hb9+ vPVra]\L`tbX$L 3Ju/Ve Xw8|Y艒tz[!qw~;Zg2)Vƞ~7zАǯcKJ瞏b%jJjuUR] RdCf?~6c`[.o{X̪%9jEbUЭswsK@,A^F{q9ƚ!L\2aB3Ec}qmgc7xVl9P`prY7tsh)ag_)FRn 獷tv|C%2)/O߫O GM_$q@t(ћJse]4w|-j.pɚf1[\+^hmhMK.B穛tmW$>kכP{wMY7Qn?n$&HߎiM1O^L ~QB ?ې Rt/(!ܟ-R^ѭ Z% ZfUb 7V2 6H`FC?{P[l)4̐:nhD}( 7vc}0Y nհe/!lѤbhRM] k44`OM;oΑBF>twD- c5nvY76#F*?.ȺaPFn1|Y7%k)Sp*AJAOU,߱fז'2 8,;FM~T:4<ɴBY CQ[ڠOt9. ]ٳZ~VGPH,!!mi>AO+*3~UaC|†\nsX_g31TgAj5u*b=u:հS>B`D۫V2^/+J<e1VjB~REy@u{l,&?3 Y/RP3)ӳM?K>9u(! B$ Ǥ̘.{Tqx^mH>C[ռ՗dH @}؈q"U<3m}񊴁! zXm1|JٜST"ki(UQG`f\:8e7wo~|:Vx@RVS@"}gֻzlT˛E|?8: $[5ѣ`!ɨjr?ޤm2QTLipl8^;Xs'(>@/xFNbZ1b2qB"5TY"5|s6_0:冈=k窶Zve90ۋt5鴢σ,TS!]7r1"xS#{`M]Fg< O-&Ła a՘1/]u.aWĊE^[SU$~kΠ VaKJO  yo[wyT"@<%G5\{8QT`1A< gթp@aIRnX;J: 9kNCZŚ"Ơx"B8Y< t)~iYQiWҲ- HOhՐSeG/V%|"~?:unM/b3Mʿn&{O?^̰Ja7/?|;z;0[[2֛oG?ՓN<8OuMRN.фjslF-A7ܫ2KZ=7['O";kt# oGWMKqkB&ݼs(?y}F[O^sUv ҥvSڰ%[凥KO*o!M#WJ[),;T|3*YVޡǣǂx Eڮ5夶+t ƽw8#q|deG+C(T|F_pHh5IIpp'd:WsXp. ,zPHhX;V{X`LSN*N$"*jrd-5RM@jA>{a)ճ8ho(.vګ>n~x}tͯ_zZ{f .ߋO^6/nd <1Fa$.%n٩Ks0qUհ<}3<.8Kj Aw|~ZW]{_Y_ϫU(XQ[.8_Y[}_G#so BUw|j՟y/tt|tv9Bc\?m06|Pƒ.ﶿ5naX.LMtIOkl#ʙΑۿ~߻hٲlƾ:w-ruwrvv16.fBs:{(z[ WԬ/o5we+NLi! ;Jb(0YL_h|@$S=c;%BOBCP{qF qd=74,6ʉ ;n뙵k=!6z@&*٧K:f9XoTvS?aE1<7XanKkջXv[U/guEUOz\6Ύt<7;TyKՀuIqݡ 6U2\X,땷~pkc6|%_ytVf@eyknxp-u=b:}IX~P4LT m9:u1}yԌhuYPa?,$颷̟[P+}OckD5FWY[q\8}/G+?>#.gs@}7?G\u?&~b -{I}~[pf}G,\6s9Ӭ2Wc^jZnۙ 7Y'4&#qIP'/MOT\SjL9zԩck`M4)*_~NL%QIPxtG'"%}L1E'(2y63"!#qDS?n!׶OǸ+D|q2-P*_FjK[Ϫ@h;`xA}}q':HFy3Qތd7(Ww@x<9`_IK'VsN"K$=:n rA%Uz|?Z%e}˽/%;r/FuzZIc+x۶_`GumFٮUۑ yo9;UXvXK%rZ\ְ: _,GWm*j 湃JS \ƁUG \\*p[[ <ƆZjɸhMj3Od[-ͶZ+%O۹TԚٛ@T@ nS)uj\b.+Q#Zێ 5嵚3E6$LbQ`Z1Fem8Au`kFY IeLT cIM5JѬ "@Fh@bsta@$*mXw[ QmnǪ,ﭙq\f(R8y&X P׺햇rG`fm cj]n1k%Si&paq2@yZq1fq6\⦞L46p:2kf < j]b(R W 4# (3 ?}MKmqq>3UV$A#FdKJkԡD"2p"bf"68,*BD &'-լ/lk^N11ܚ&w?ɶ+lL ±mR[ ZBH=YrLD /VbIJ"x9DBT%D$$ ZFV[53 ⬡PӝV4a|NCJTK02N$H)t H9uUc}6@Db\m:[kcQ颔8Hh*`%^;uƅq%<5ZJ`$/%z--"uLXcK3=*3ssq gz-uT:i&ߡC- !Cc[pB{WV黄`WaKZ!c_ScA:8Ëq$jYJ@ H!qМS Q@-pw>K3tK0Pdc 㒜 [{0^]p+81l+&;o Pfb-.1jE-@53vDP9k5T LHp.pa[)YbZ-ka[p5Ҙ(m=HR..f/k JJ&j0 BSMZ-ޒq2v-uZC&̓Q;6Hxưuٔ8Y|f<G $W;8oISB!Ւ>t8g9JՎx&.#Z2}H3 ˝F$ؑ:H]>"/֑iNwm;nHW ,vy c~Xbw0fܶ|뙯T.gSt&y 'z^k|рZLi{x#+=`o_U[s]'y+&>kmȣXkV_3iduLrJ91Ȇy2>ĴMhm3DAAu=vӼ_[A |'q8uPXI9yR" =Ѻgs:U<ݢޏ:O- xYfdɰX,DI! 
rYuf`Gع,1ELlNVb{PH:y!V-C ʕ/,|gA 3pY"QrY 0L f\QC)y({#43E2R6p^ y&CPu?J_u?J_GT1jϼ: "% #H02oBs4Q8ϺӃ`KOФRS?>zH *靪+/:0NH2J[u$ .tD }Ni% ZF%>!z4HNԙH5)A ~6 a-?Sss#-'g/?Asn1$eHkɌ쨝^GAgLccWjAɀJj&+Ȏ\@{7aogp'S ֮FgZq/Ahcr1h5^nE9[oVGW%w ߖɸ&$s~>YA63?~zw_\~ w_ғSBriZ:{H"A)ʅX?1zʼ%^սL e{XYE$AȄ٠DdGky35ŝAsXu; ZgzX\n7f2dtmX7ӹx!mdxcc돬wQi7#!Xn+l\q$u}eKj=K؛7Ď J>oz⫽~{q:{qgrN,rסlAHE+3–)(Z1T(Zq9%QW4RӨL&A&s9/-@nG&bujlFhg"P2Sw[ZlIhjbКc^k MMږoa 37˅_bӬ (5x^vͶ@Ufu ZQ=AJv,oLE%ƖB z,aPrTL|DͪbRnZiHP= * AI#OX.%I -ٓn. KSL,0,y߼-9y^Լ,<V,"AA:86!O.ivތ1Uj\+M5-'f)[ "@+k]2\&:# 6 6s>1F1: U,%aOb,Qt|k|5?"7q˛WE9V=J|YH>;9O%M`4`^"/hrqN^KC\}vRYz]{?^MjELmZe=)ǸV2AJO3 )}j'#7hfNQ;|(E9 RK:.^:1j8;~ds]&I]ޑ `򻡅րM5JVUa:9c3GYJȭԊ!82 aUD`R jxrQ{REH*e !@ a ڪcmG\g(J)Hj1,tlh \(@FP k%RPr){zB6\5PHpRaCc9̚%0NOִ{7)@wuzf7RֲWMktR W49QYYgj`1u./(;&A}qn-*d;7c.FJ*eQLN=f)I61?PT"M3Jd@/Ws>M+GbEb#:- .~{ݐ?1D~&mv]&6Zkh|%2NU/xӝ VfQ?^*A[neaư:;47{'zFh)r*'83vn[."HCшܨ3 y5.KLbY SAZ"/n*_SPI;m=-͝Hꋴwqn1{!3PCzK2>Γ6GuY'6h>%(@&4.^&iZZC24SiTgv"6]ޥ$rp@Iu"b͈!U6yG-#cbhgPI.:/t\*w^^1&]HEǸ(CR1;\zt2Дu$ ֓804y\DL~@& ɹ1ۈQCvRB(PC} _ IvՐ6H.rWusi80EjHr㞗j ̮=IӄLϣ3^l!Bgk!n3IVwO|AzUyU5nJARTUCd4;DϨ <+qwR?mrMoZ'["cRt[HU1G}#zh 31Hl$V!J9o5a'۞OM%?Go1MP|}z5'r0H *Bœ: IV:dIG42՚3xu` 7 j,N;m2(t\tS4QpaP3i(=|R2S s<U>?,Qi #3>Xۗs\R)֦! H&djtgԬ#:i;gx21rHu. ͔"Vwt%; NcHK=7td-\,ġLrT ӧȃ- .·86L] G# 1[aeׁd@5sVKiӑ0 9Ǵ.N>_#& *cjnQ 'yrezMSUì2 ]X`YX6gF\rxiy$ݹulgR ~~i 49AN/q@5fK*/vPI&EiM 9) $BYme!/aįﻨj)6hnf 9ITR{2PɸHYVڳsKVҤT\ \d@HB`J:+Jmu@OBSc$1thJҨd}{0ZQAJ.OM% r3/ g)[$) }{ԋ焐<"F1rL^*>Xljc>FK GnYs>옭KEti-Q4$`$BȜs*17lh9d*+e5 eq'.p[ 2BSYȿ\Jג3%ǜ859eL4bRJZ+"O)u#DYcǨ\k्YSV7fs[eH"ǀ!!/-gΨJ98m*eek}2%𘶼[q KJyf҉WJb|H:TLH92 DJeY< g#`]?S6ڕO,Fh~~7iseu^`Y!zcDDg$\wM4=7샮IHD2V Α4"D!`HGH8%~n͇`a*[!VAlL rA•^ v6Ϭ;kx,OI4+yy~<.AO4zJIIX '-s1ˑ99 ;4ƻ|Ԝ ab9α,J9$tE yQ:JGnoYt5|xNFXXcғdK&er1:E E$Ҡ5A(j]9)gs>AdV@LFȰ*$<$eJb("*8[rVYOoRpbcD9AWh3"FfEG$p.SCHt)fk);idDDJf.K1Je/d,CLɣW|[x}5}8q\"#F-en 8W>cscDX&r`"Zd7gr2D4U83FU6j͐$:*u?)V7+pD\(0aYRƱ25 BKznϿoS*.]AisMVGi%ۋ:f-mg~f%$@27ga} zEQ=fΤxHYcmS\۬oJfʲk&DF0 ,8/\tF")<3D)nvsϸk!&$A^IH7!:2 ! 
3=P`1ܩvPRLUgkdŔ[5LZ)|[|Yo.l:b=Zgq}EC9KBPBEN G9y!@LT痡WcDxi&r;<:uX,S6!r}۸Ebdܕ59zlt\o8QcLG vAܜwk Hs*N+y&TRC#@UƳD )mHG^zMC;I׻Oe0{r7=fwoLKmSJ{P[1OAe_B,(fOl^N{Y.F.I@g7_dxxX#/^nigB;-*XR|8)r'C0m0*2m0*io@ A;]sn/匘%Z:?I|Q;:Ze8}VQ3Ԫs:jԪ*j^RV u(%jqQG^R#L?ZyT:jB3 :j1Z[UPm_X}=^]e4c{S!YAXԟ]!?sә;>.BPbl#;=qm3wX e#3wXbS+!PeX6B;mHL'I!xJ,P֖쓆[ɩ Pe"b.KD\EW(k5{IZQVIKj⯎_֋u4g4l/U:js :j_pjڷ#rZOA®X/UJj:B|a=+[L`ptgꫪA) A}VlW 7BTgVP]F ^q#wfյ[!mJɍtBYjT_YD?C;7$(;|3^,{=Vp׎'?NW9Ħcp{Kxpb±eP q2OhZp9PGas\mبiGpG_v2$b{2Z}l ãc.ZSDg&8q@G]8i'[MW8?ݱEBUC8K|B\psq"y;Pido}UK:]xܡ77Iþ4D!I_En`pܐQ"4&)Mk\\Wa$*k.=ޥ㻿wl5ݎ74Տ Սbe F C'Qz%eIҦ,UKJ\HRX|;/Lg)T"pͯBd Fj U:`au 1Aj s_t=۷oOl>ZwIQI%U%͕dA:l<ٯeb>}ko/bd #Hn!~~Q7 $φ>$"E 2GY"+eSJ&?mGJCֈKa` `w!:Vk}}X3*ZRf1+3zueW2"ODϸjlzfs5c|xY~W L~Yw vmE>zѷ^Gr=8rtídNFCrgi11)HKMU|:iye-G;q l8v=2pk2bT7e)wT7Ϫ6xޓj,?)wiV.Xy 5BS>|o*7G*J[>Z#w˽|[_A?:_&,.[GYG.*]Uu(e)!-ڈ/(DoE-TQz`^E'8GZ(rQhUqQ.Z|qQb2*J r|he|ˁo.K]x|X+s7?EXXT7qԫ WgZT,* jm0LK85fl[b_*^X십p-&6[Ej\CVՇE%IE>3HlmZDOq7k2~D Z}5׶}Wh0vt*5bs#!Sr6ԪP)BV9˚TEkNyVUZKQ3w|~XCaK9[%R4UB:6r`VZBd,0S$Q/cCkC>{+ܽ݅-뿼=p9zQbh>BDSm|qr 3<(ъԼYٶuʵbcLd@Amq}pPvG[=RCѾn^8w֔TD&l6Be eu(sr% y.$D$v QG}~ח}IfSjT1Tljk(Q,U)AWąx&j$~&0({ؒ=) n!T vkVRVBDxML5zO1=muk)jE"+r+ir iR;)F(GeߒTIyN㇂@g?BU|S(1oLy]&;swD@rXڦaƲ^_HS\l{y&xg,;0 ՒSF׆ '[c-rQm){Kg'}@(+<:V&^m⑿#ImWM EwS~B.HkU1FF:WmT-Zpo%C+&D6EsMs  oKXksqPrnu$Jb*# jD zdPv>_- {4ѽ>L-Ġ])c-N2[@H Ft.]6/%[+]IQ1L syn*T Ύ+o- eGYǬkԔ!~Gi%4h.%鈩TЖ_ٲM1oD;k1W;j-IȾRP\I#Bż[jHSۂ1s,9!G}d{6%c"J G`Ge7[2l $&xf㜗\_k(x?pdEʾҍ^lW=(@{@&ψbA %urzw/\uW @nʶ+i@K%IE4QR vPk66@W#_]iJ3.w$]iٵ̿4fA[fݪ+|?dwךCv+MoEQY QW8ȑ8-HsÌV̫f ZE;D+F>#FxS'7~c~?J=B#?l1BrtمtFF~YECZC0օ,5T-#>:D@|NCqiǸK[^ɈF(ʳiZF<<ͩ͡F񊡒mꇦ$*44lr7"=lZU$R0v D@~C86Q[Ү6,!$5XzSlE 4x@q\K4DrRbA0x.{ՒkҮf'=L:"-tD&P6P̘F{tJa^- HUN{)c`Ko֦0 YfdodAj \IMJtP yޚUN!|&%W#$ՋՎ %a.SA!2 I%m}QQj1G=_P&1owvכo^K;}%x/oRg;ܼy{+ Qu'Dž ?ݧrw܏ ?mgӟOm&rB7Z 9eg3=zKQԻ#S2ra*}Nb7!` tF1V)2۰Sm \9D0ut[3 o>kCv茾chӟOlJSk/nÂNe%r”X!nmTS39춢MOhWcaAra*>vf[8QِFu0:n+b6,Tv[!W!Ly9FF"3茾chӟOmje5vt*-+3]d,vcuALZ|ǬulzDf޺ۨ@ȕCtSa''fdtP61kt="QdtX{ !:)(vz{:wamzD׋݆nK !:)kQ0XbWaT51km="nN^w[!W!L9ʡYvpKv$ cgS$ nNfrT0nFqX:T2YXSz^%֨+@ȕCtSҕȳ 4&|ǬӼNWg:,Ӽ !:)mѱ` VaTS39춢MO(f]aAra];v{:wa ~mXЩBC3SSk9ۘ6!CY׈4 6U>j3FmgvY^›P9&-QasKT,RkyUC_}ïaܧ?n޾{yХO/{~^_ׯҳW?>MgޓYZx/ŚS$ w\mk|35Ŕ&t^?) 8r:FQhdD~c 22dو@ص3 f฼6Vg:c09qU(b1ŰJ$.ty*㙨 1Q;gM=)'B.YIY %V=ye KHUEF탖3A"tQpIykwod" 0Zw|&|T4~(1XX CQfU#Op9"Hxh!F{QLG1gi6+traʆ/䜰[q6)}vnq6%۰Sm \9D0i~f mTS39Rnj6,Tv[!WLsWV\!?ec_:T1YIzt2aA'$-+νwŝ(R(35ukn5} :r`J~ݎ-wu0ivv6;EwiBaLĔ>~駥n7iٻlWd&R׾\wOLv' ,uk-Զo0VZXXdQ Ė(Y!B -nN]ӅN ؘ`mF? 
H>b}5T/aoVBЎOkm d& _|];섑Vɢhzڢ:85ӗY ,*xQY؇Ak~$]IxԕGy3K_vY\਑PXS@cI ĺ+w%Q>{d1ѷ:G$R0$&>8E|Gޤ@%7]#iczT\[ *.- xuB7Yw)I >3E'z6+)iYY-)]47a*cRٔB@XHqGMEHt GP鏻Q2DzK C)~AF ]-F<].FXS]hIFZ8k1{.DրaF P@ާz!O┈xڍUAuW4[CXK= ݚ$R")ϒ1*'y뾗&~Qm74uH\%CqЁ ?8˯&i뷥C:Ӎ Wp9eUn䮬)dɺ^jpkw#T¤("׹{#i[yz/"t"]äCI nT5 ͓SɊmKe@ZIgo_;8"Q{51J.Tb fܨa5c/,V3dž`.oƓŕD+{jN-!5'*T "EYʻw&Tn a8]Ego'zaTE24%: ZQb][ 2_?:_bJzKʵ w2f\P{ 5^->V$Ks!w#ZktDsP}:t_JyJ;ַ@YCHN2F tj_x^Fdϙ^mon!=Е1vܞUVAg@@}(2Ce٫N5dh)o:VC&abS !ۺIn$Ԛ~y|RIb,\D?Bk~柆['PIrYNLmMk\?vYVz\|URy, 7ǫ,B/)ַiP_Em&R2튺KwC]ԏ%b*%& \;:B cMmGǫk<7bj)d'6#X7}X)<6՚<5vyr]$ ': "hӲ09O J;9-},+~;CQDߝD/9w7Ux U|97oyHHN#1ȵ,[bo=n(W4Ҧц޾Xp~ɹZ&5!kO2 ge|iNvj ć۩aK}%ŒZ/k<"fD]$'uUBR#<JTUs3ho7/kG) "»dHWt38T oWxKryÝ^_0dGhhu5)YN3 rc#l~s&pyr烌uƹDe>`8blZ: paE®']zI@Mm(|:62^}㞨6͏BemDaTyפ#a«ukeb`S3ex X)AqI1=-u]>ϭ5HimdWT8o k'D1m[@TH6-p0T ?d3հxR Khe#=5澄hkjmX!N hlOLRB_h%hԧD"Nw?'bfyr8sŴ$,*ɞ~U1`/5oh^dlѽhy?dCaG8Z+KŬЊ*kFe-7p"8fw #~CPFr-Z ]?֟NZIYөTDN6BQy89ks D6ag[-m߲bMFP6 `\V;b;ϯ6&XQ(d| ΍T"* Bw穕 #AJKt0ee&WP84phq:l{b9^RDN*F:aܞTW _n{K昊$3֤9jg5?!csedL\Y س^ڬ ]3{EF,/DŽ[Z8>)B,GOFcUrc"f\EAUAN!EYwH\`UWns2( t=-z ¹9/695 ݛt:+U9fJG*$g4=6ܮ. LI;War5V2*Wujc(j#iRY $Tt/,,Ğ|g# INggjycvt֚9-va$$AFplr!ҥ$CbY2N 4,-NORYk % 3Ϗ%=Y Λc N!8{ؙHdC@Jѡ#qhdk͡0ê B4(N.FD'U'c %qЃ=_bdZk) lR2&AbszagcO RdBS4gXW(kh,|!88[(@%lM.F:#br ^jLX*"%ۋA'Ǧ`/5P&9S$l/IT/4PH v D‚avۋ.g{v$+Iξ *T(F)B+´Yhуp,eF(gsj\T\P'C8L>lF*QC ~\oC/|"=64öh@R]$$Ɯuߵ?&-˷繵ZCrO_͹1 5Y8f %'~9LGq`!,)TԪNq4Xec'%lJ @daA*1 #y1a`3.'!9ɕ=%c[rӕ?Rc) /?}VP \')%L_Ex34sw;7)K D%}=nHU 1gQȰ9Rp֧1\q>%1&\\)RRD'sdJ$J hr-{UDZTZ"pirFT:]}L &g6F!ֺ$@yehc#I,C XLBTDH}!)Eh+헳j6?jx @7W)B0}Fi?R #%"NTQG1cHK&Ѱ'ESG9%}].sxZ6 :—U=.iޗ6@½{ɨGzcrڰA#=%Ӣ;AN}$sM6"/F;ޟES,g[ʗ_kfA2,_ =y{9gA^_l{`Q^%zҼqooś\̂r_j`{-5w+Or7 j}[$)Weu/S yӟq5uuuV(MAf}kӽ%5S6ߢۛR{QX+S8]wRu܊^dXeoߤ@W~++=CUM`Z3Jp־ŀtJap %w"zxb+BȈbO,BÌE#7Z~ˆ^B9K|fRjԂX籺#Ma|Z!{m1ókVQ`s*NvQ2N_]]`e|MټFg K)]T;^jFA*ONllV}eD8L䞺(f qebq*: \=x{*V}}Np|߂hqvg7gW׎a2\ QNaQпpjGq5RRBPkENX.,FK \3k 'g㉥ޚ(˧zyhcsB`".)u;M1D!)瑖FBS{Gڵ=8j&JW!K7ѝM5A๤Z\,C tTc''d"V.ՒXg9A̾wX΃A2R|`:ZmajS$םd?DMoſֱ<̞~%8ԟnejw,20e|enOM`>}O?9>ĸaH'䞸xI&K>KS.~qUe=Y62.J mesɯ+yAUMN=CYds) * meɬ",3tl`+?Z)lK3]w[z֘A11HhaMc:i*u/$S4wg 2,bה%BHM0doXuAɘV8 d1΂K + BqU-*ƹ9҉ QB"d}9C2,y2ՕiSX:,׆ɆSXnkavS߄GiԴj/%%L\ٚ]OxwV@M+ڛ:]u[rx*:6 0dtCal|;Ռ^7bLoW>F'ꈷ(ɑ:rABJ_|jQPz((#Y޶88npׯPֻ%aL ˔8f 9{U$%' ᘞM9(x*OmBOW)<͈4"Q(r>" !Kx.º]䯏PmO6inDSS2B < f'Z4Ddi~':.5WK, oNHZreJԿ"*~4 q6z2_n2GBbvCOMdF0yWN zffw԰J, 7?> g=~U"Z'?Ū-ezto'RnI']*1nzM?i5M9xGa^ji uubI KR۔&{C[C;} _Z@.42HZ *FGEP{b4)+4IR`ƅ=IƔi>Q C6bL:){Wܔ3R_h^-Ȓpw HP{J: *OP4lh6rm+.uz]qbTU@t K] %Sf %YϘ_~m3׼ߵx-^+*&Ԉh=CZe* #_!y%%*5K7rϷW֥v>/XS\-T :jWJIԾ%[,CYh'8_ mdcmm:,Ӓ};ƻv}ʽЂs:Ɠ;S-AZX^w%je oD8ק/j}F>S^To+l!9D{aLZAd˃q=jR'w7qkډ@g^2=gb#oeڭ/jm>Odv7dvB>svƔb˅C\ڕC?H ]]}T>Uo{H9D6ka(;9@sk=Z} ķ4=I{s3Pu7n>z4kV|`v_{hڒ6[}Υ$ȕJT3վxs|psQ22X@:w\g1Ä́1X#فi%bn_n|l5sqk6&ϵ V@sYfpvXn p^} pt4THh1Lx8XPI΃a97; L7<!'mY踵`w|b0GC;XNx! F1$iAPf sd@N+kKI$ZC-;FS`Rco}ög>MnhmDX3q%^]-vcioEڣ5\CT͌ͳJ!HjkDJy`x8!ZtCcT$#EJ^mVBlzsiM2(Txy5wXx\~"n|;bH'\$ YxuIiȞeɣ3Hd8T981CwDhɦvqj`uF'o$n f3ED/-AT.A &ʏUo>d),mFBB]1[?QH*f*Ib1yHU#w;XGҘ0ίϖ<{JC:kH8ݺAt7åxxX^HbekJBPz(JӦcGPBlb4 x*m<^K<rTcZh9sA2b? 
M{+c Z0τsBzg|=pZx&56e|t,E hu*N5ɶ9BCBIq7LǤcF+>.ٙb&~m),$<4I΃qJ'bJ&ƌk͍5Q'xpB X*T*(d2h7Vc}f =~#+" R0U";b>Ȑ`I/zٻeWH2<I؈㜇kci H/ o NsEydGZcKnZN.r{2;\$FV20.0CXF́Ek*k-#\KLJ%08]9(60D`u@k%aN`ebaR!:#aMܡG%Mn ,D%fxm1IP;º1'O4ubKGE,Dp" BbRRp10kH3ߠPH0&xo(+XSm(887F \^#b//k?)+m0 t2N$b#AŽVZ, H;"F"LE!ӵT{JH򜇆afk6]ҳ5g^MJ`s%({iiޔ\nY%/[yV@ ݐ c(aJb=Ok )(qX w[RZ6*"]=!X3'=+Ƃ72@7L&]'J}JUAn5]נ5|q{;e6t"w~6MRL_){IUihv `!#U4_S?AK=&TbШv#d2@FfRG/7mA4`.|Xf{R[G0u5~-u+LzS aH2eS޴2P?P bX>XCoֺ3qRBzJC6SoYp)vHQ(3M8ζI1X+ٞd׾o_}o^8cJȝ՟nKa~.A)C6{* L'*تqX)D1#3rlebQl8I[korR*!>"[5znx"Ks<4$='nC Kbp8J ؅a E-7;6 %I("t8YRO܋VJ.;1_ӻ[ư( *r9Oo'7~V^/'P1pn<%Q?2;s^ A##*G9flMɃ Ɏ-G44ںΞw}l:^y:zY|Lv(g-N2`~}??cq+Km=gyc;%epӥ~I" dB32Hǃ-w^-dT~ E;iפwz;;叝p6ф}B~\===}V5ƽa'JI&aD 4x"sVQ-IL8a GT*_͛TNg$˾躅٤%t }6PMw]io^\D;4-%vL?-.ZwDSڭٿxlP0(PVŵ/Sֻv5Qkn) |P> +v͡|J=yt"!UX Jֽsw> e|Sh}a^l+3_6`wlA4uWW@_aŹnbdӣu;b˫vP h:f#іZ˿VC=frK~";ۏS9-xYihNq-{k ~՟'2 d YkHq?ﶨVSJL]gq=OÇ֢V=/  y*ZJL=- Mney:eߑǺ2Єڬ[<[UtCN.64jTW ;Qez58Tk>†i't>ԍ$#PvDAX.Jz:qб_F 쾫5fF;֯;-\]k8fӛ0޵?N+@^ռi f i4jߑ*Ez@% T;äo7%lh0GR tx&覽dNͣjt⊙NĊNS\ "MD-'/MfʁdufCSk1nf''_P fi312)\SxIUrCm'4ZSo4lp+unuR:ؚ Yd4ǫs NX)斸P L}tCE$4*!\kG9_1nP읃uIE9_R +L}[[w.dT5!Dw9zɿsvʻ}+4=5)5X?Ѕ|r(~;@YT֡OW{[w ,Z4#eje~sP-wV?R&b&/nok n<=h4JV7ӠManv 79(nI#X=Wo8NI!ݗw!IC\")r~'kN3K %+PdDJ,UGkJ3f%eLYpLخ%U׏x<+? cTϷ=I3OsQI9NɞwXq1X+E~EhPU^\JNLid:Spw[]AAkNleIc; ٻ涍lWX|AvzQ2&7u3eWd&.Ř"Y\d4RA\D$)A82n/~4BУ[̉CfXp|c!9|bg"ܕ {;U6eK{7kdȮgtZYAɨ\c;N.bɀРBרN֯^N 0j$Z%ټaq|r73[P輷_bbOl <|h%_&cnU !j#BqlpBAVФfjk}Fz\LX\E뒻a3@M7';|N&K_NfςB݋dVbƟ/~o-8廖y rCCQ7t{>]>*TqAvqbk(T4Ygq`{9C =s50BvNFs: r9ۊ%w30H5'O4q3ȓ4$Z4or 8Z.Uj:Z&#P!`2*cFi}2L)Doٕ듔8tVH9Rㅈ3 >euZr?wmeq a2_9b#L'#ʯۮeC_z6Mno_nkMa}d{F6ݡZADUᏄq/%.]6قm>mhsr,dOQpgrtFDHƩ&js>_xC&rff 8֪ Izd)7t7 MO| spRlL)MNvWRIHzB.gXn 3 (HhsЧK嵸k5aYװ$.qq kr r ŗAb}O[?ŷTaJ#|{e/G >la@1/$AHZn&Pv/6Z њQgѧ7Յ~c{"_d&~SBUkc.a``uᇼTXϢ΋%vza'< ;lװ7֩?z46PFԣʗ%ޝVٟ lGq)}aki >s#?K V ez>S:>aꉐ:ԮИ"hN#-(BXJbiQE24TEX4iE"aƳ1#4`^2(SK&4MS(P2PHTpRnZTcqlO7N6o7QD,%꼍vtxίn̓B\llыϯ#}{xr-hhOgtI5tu(pV(_Jix^Ca> B14IXߜ/%@2.I“1Ƴډ*Y^qYeM!˳,TӳK'u(kŇSonWkA|b3cDԊs4! 0ߟgBѺΔ6ǭGAVdKkgiYaPw%aFiaC6#;.|]VC4 #xOpS^{ 8JhA'$(qU 3#ԮE]P!,F$g ;Ďf|?qOw{]}S;wY 7̸5Ï䷰7++VU+bqĎYƨY܅ḑWx*e4/t=}9{gm<ВyPe{lO17iSm ?eZ\xNx cuT S{-;߶u$q8;|Л̢yNnW+yu?!!*#p_VP;Pe^wX[ZPeAkE2Y׋ :p4\`pYlsAJXRyc.I+#=M3!oLqL67{yoow> ,AbD Ih"cIĘCi)8iX$RKe^?3fU& QWUtٚ5h tCPMqQ'.&3[[ܝT~椶B裸,RxX*%TZJ 0cB#[H2㑦2- Dr%4pJ*pL)Kf&"fIQ4QwT@4He*(4:1"HF!""C])IpI)246ƕɄ12cT#:2HƌD)e`,14&:NIUG0kB+R@'hT)t<0]P# )7 GQ M|51?my=tZnuK?(Rue*LMox̽Rtwsp=窋;onm0\wW9CGY #D=C/ƳSDvv%] ]&t݌ [L@a8j_܇^7}Bx.go?yƉQF3 (m!o܆qmț,氥`̀$J("bO -$2֚ I4gHM˗9lUT)÷^ 9STc ,`4Dwe=HeU0Pk 5} ݚ*똶{}:)yH*6/AF0;1*2O1r XS.!q+>|QJBji̢gC$ [ 610e(N>$YDppd.8͹IetČP ?Ѡ*N>c1 5R*pk@Hx'I.(m~Q^i">SG1 EV)F˽,X"8ws$"1:#1y@9F(ikNQNQ41e(ꖬYmzoW*y] :w)PTv8{f=8TD -SԌF̃!`<=`IQE+)Ƌa]@b~xTUë!xW[C:iS,S%Su 8! V ;_ {gW 2E2/7rLP.13.03'@y!&$|XGO0ʻ&}X  OI[`F-gkV4!K&9NUd5QXq0a;r&(Q2gd\[gNWtWǚFY.(?sH㕕3ʼnQTٔ^o8)eQWԭl@|߭x]q­K9&^hS)sÝ.;S,⒎"@O"Xc_|G%r껌ݩ`qd  iUV`5.z$ ; fB`gE \0{L@C )X0FI)RzCp`pnP$;(c[K r+4ծ ,,(F &&ja8 Gkd eb= w-XU%!]A/yʫg+`r&D? 
ox6oKbGg}|=2[(E haf~ n{庚mxyHxfh6xm׵ .ҥ yy(8A@k0&QQDw>*,h BpKSm/f;"#̃3Ll&6c4pƺ@ &5#H&lj0&QNYoY71KhYB-+֋diֶ}jNĐVZ5XIJ5t+o=8+klJ]7E4[l38隃FR+FX b&dV \Ԁ/WN3̡# $J68F:F_;FAEj'M 8_ޥ\%HD<`*)3#tZp|OK4 :?-N8K)O4J:*EuN b)"MurNocA$uᤀ$² 2NM~_EOW |vNjf@`C`*'0;H4iZ?ؗJBeRas]@^9K&aP!>-NcG2߆WW+ wJ"rJvƨ%[BnӳPl38)M?-Y>ɿOWqlLۙqvɕ9&+Mv$!ҙn `m3Pϓ޼p0kʾ!Ϲג#.A5ēaI`ޮWja༟7c$8|,n0k= frPodPe&n"h3ui6H9SB-Pͭ5Aӳ֥5ᥣ'jt*s!y@$ƫGq=rga+⹴4wE)xݶ+N ^g*|\ss+ŅkĈu8S{M՘q>)sWQt.,>6rB;ƼME hDD ,ײ1Ojܯ 3*qav:SfY ʧ <ޯigxC leA{5B1ɧG2-c\5OkaLbb+pFe)k|j,p%9ך7yb29Iո-evZ%9:Qi]_^lR/-v P p7׫٥7֭i*с ӥ\^%˼kNme^;,^WߒgݯA`U #t!g]Wjl NSKWE ~]?~:PS5𖴕GN#sfbr: #3+/r H wd bU*9 Bk*3zz e{^QJv@HR;,6r)(z0Nb/1[B1 a31HXcC;AI,Յ N)~^f];Fh֍4㍇Jo$:xYK&/Ͱt9GD)2s*s*b(N3#VǾ!M|MLXocmq.›YOO!%KД6E ,I1>ш3k~g ^#Di1)XA 3%7Ӗ.KkwC+yskum[l|Byjj]) ԍ#uٱWփS4*Et(] ^vSqHwaPەr7pxCABe1 , ԲNvujcgCN(r+PkɂtB0B!$!wI1V|FQ㢷&>kWؤW5rgBz%n^FJ*8~+4b̓ F@QT4]h-`-qT騼NZ#ݖsNRw`U J$>'҂sO g&N1JGSNA"X5ɷVr`ly+M=}\ߤ[m^)v҇YWjſ%q6w|`R-VOG{^~nAtӂJ0_|q5hT}}B&}2&rI+B?(ܳfކEka%hO5R.]>nj~ ҭ)t4e2 }KJg"jVPF[)9S:FvSWl8V\VC,S4U'˻u~-,cd}rV0匵\|ͩgi6y~sa}r^h GH})_3<("JpD)Y(],Kwɀ?$CJHQAbucQzZ|nYl* TAN84I8ʍ{lAqΣS<'0mZh[ViGs+~̡1~V nx gR;!azIkeU7M E#mh{BHJ3- Nx^2zW7$=Yzpz^ ܀ܼF6ټ%N䦣$f)rO5~\ٺovk> YRX,՘ΈT|V^dJAHrjdKl%c뚅mÝpu "C1լj$E&uf}T+,F( 'E;CBzx7ī[O41|k{A!qߍL!hqz[Hr~}u[UJ)ҲRJ4Y>:)_R'+sVl[ 1XƞȰV ;TpXF}ntә3'xvK-b QڬA4֨y\c2oxqooHxZF#6|q(hqAC&~f'G_V|nj*;ݭ#hi`j TvE5TV'UWTRe.R6 4-7B$5lIeLKEƖTάZAG#D&T~U/IX)PUF["6`}@0 M+^so5KBttg٥%WIU//R3<[hc)v?^X7O QZm6JMPfX=vvF1Cj`vlqܵ-i_ڡl+=Zh]_W%_.!%EVoZ?urQ c~B-{Hz- MF=!1w؟Fl]{VGz1zӅQC\b#WcwZl~}F։7MG19QREJ- Y6.=^E G? |m=겵+owNM{k{(Jځ"Ci#mqݝ.FXApu_hֶ%&THH. 'XW1ғ@Qlq _H 㷇PSko/={IGγ(wF7Kхړ58 (r9D!dޅTvH ھde fZr6puC̦ 8_TbPP{l$0=Dޠ 5Cf_xQlhE)-MSFvZ $[.:D:D){s>BI$e"0ڲԊvJeԹvJycB-ӐBF\o9.f )zH vJ 2:4 eq켤"P)O!5:Gȁ1:Z [Q1m8LVz' #!5G;bdd)S,s g,GIN#@lղQ:j4>9khל1s Ϥ\ Aev pR8x3N4t?xh`Us[!#"k5:F=cL Y)%UQjNx͋z±k wwߞ*K  mV cQOF1)Zezӫ^huaC54:X/+gWW∬Qb -Vc*d'}Fdg  pӋ\C>Z*"_P+ :W! ϸ+F6_oIrcvv};k_QC|Q 59v47 ʅzt禾P4b3YX5v"-;jq4:a6vئn&}\N"l4lSB+c|Β,t.Y ۥ& W nT>p'S]_Ls{wT։VՊ*4<9lM~q?b27y9ZӛscnLJEyV[YmgM\ Nϯ*D%U{; 9I,+&N P9K@i'q[jF4=?z[p3֕il͟9}̍z>89Ob-[j8UmtFm|Ju-oE.w*$'!.yY 55؁c*P|C)Kpwh~ m5{CG|LwĚjo]C1{} !H.fLeHڃ0IDK1)Y,ޱ~6['}l{ \lr :  j%EB:Cr6ϵn=sOBQ0?A(CAFiZFa.<)!L0u0:CXh믉(ޓq0jNVJ4Þ9}iD) ݣm.\ݞt(j]wSҵar ,}6$t߿ZضGI!2j[1_.,;0xSG?x{/wcH9>#( nq=+Ӻ NIpGx{|AePa8o {3w+%9}dpen4F7Usq;4NA5W @fHg`.5;GZF NoTZлv;wE56,O ns}[H}] ; [hx}8;@ض2R7W-1wdxLojk/'uqӇbӽ=ChF(J9))+}R꯬6JB$~Έ>JUٔ Lt<&' .7uF àY!-# ׷aJZa@NCs!VF-}o`|[3v'@-gbENxg8!S B'MD<ȸ fAI!5 C ♹t/j Fk5(:d~\Ի ".:rzTVXxNXIzUT][o[9+BLj_vڻh`g3/ؖIΥ߷(9QtKsW.)ao߼:by 'ъ,6T=/G0->؀JKn`:y6z]u0iڙ(*Nd&咧7QFхVB KUzglUFy)lѵPPԄXPeUl_?Cl =R6PG1PJ)AGk5zlYaw{C@M^NaOS|%NǹU'E1%Qt={k;_(:r~MɥVjQZN# whʇ|2._7#EiRj(᫴B |p^w ]+lM2V זuDoL}_DA8|5 p3!{[O&M\LoH˘jM @ǂ{1̱/Vnl0AΔF+yr1lsa{)v~g#DHWy;#n&Q|z9&Dq21Źĝ۔bf'ϖ>ꥶ=l q dJBk#nJ.v7>{6_k؜FDv`.ZxTMEgwݥ/5ʤW$&%md/FQ%:'5[|=օhó\c"F\!Eb9ܰ9捚ͽk7TYg <_feΧmTEKz {vV=X^&]mEҒջO0W%8ǀMDf X ;~4/N9Cy=j-|vYkWu3fB&/73l3Fӟ4>F+hH ^=QQF_-JL6N7C0XL},w(eiSmMl]$D,dx֙PL]L*p._ o{OEyԲ/?3@?ޝz2.ko3]z+u̪"JݲPN^W}Ve$Vek{D5o+XLv ٠MtRR d*tͭy(%60KǟK+3$wY.h'-O1#CKE;v ,km|8 μ\o*ׅa4^5tց&5e7yMo ~/](`otd: Yuۭ6g4KMTVJ>>$KkVkR󙥇RIv: ]V~/ëeϣ}heՒלPT|HC`F 6 LV*@66[G눑;/Hpt*L9HTMbU@MUs.Eb-.gg/| ,ʩE̿:[bK*hön=Q;&0;{ c\*N""' 8D2w߲ 7u[8+Pj}uV+$R BJ>G'Lj[H(ߞЪ첄3ye: }2% Bih`Sw'ݡYv}ZH AR0[}[}|հb>td)3.W]bYYVZr95Od*R]A1dH VCn;hZ Wwcqo GΦhL{ũZA1ΛqVƋ>3HgP(fi9cŐQ@$Fe;ۊGwbZ:O {DpP] ?.FH9CLUn>RI]?;@yH2FȜ\t8 B8w q6H)ehC7"N&ٜ-DH!" 
var/home/core/zuul-output/logs/kubelet.log
Jan 31 14:54:39 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 31 14:54:39 crc restorecon[4710]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to
system_u:object_r:container_file_t:s0:c968,c969 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 
crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 
crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]:
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
[272 further identical restorecon entries follow, all stamped "Jan 31 14:54:39 crc restorecon[4710]" and each reporting one file under /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ as "not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16". The files, in log order:
b30d5fda.0, ANF_Secure_Server_Root_CA.pem, b433981b.0, 93851c9e.0, 9282e51c.0, e7dd1bc4.0, Actalis_Authentication_Root_CA.pem, 930ac5d2.0, 5f47b495.0, e113c810.0, 5931b5bc.0, AffirmTrust_Commercial.pem,
2b349938.0, e48193cf.0, 302904dd.0, a716d4ed.0, AffirmTrust_Networking.pem, 93bc0acc.0, 86212b19.0, Certigna_Root_CA.pem, AffirmTrust_Premium.pem, b727005e.0, dbc54cab.0, f51bb24c.0, c28a8a30.0,
AffirmTrust_Premium_ECC.pem, 9c8dfbd4.0, ccc52f49.0, cb1c3204.0, Amazon_Root_CA_1.pem, ce5e74ef.0, fd08c599.0, Certum_Trusted_Root_CA.pem, Amazon_Root_CA_2.pem, 6d41d539.0, fb5fa911.0, e35234b1.0, Amazon_Root_CA_3.pem,
8cb5ee0f.0, 7a7c655d.0, f8fc53da.0, Amazon_Root_CA_4.pem, de6d66f3.0, d41b5e2a.0, 41a3f684.0, 1df5a75f.0, Atos_TrustedRoot_2011.pem, e36a6752.0, b872f2b4.0, 9576d26b.0, 228f89db.0,
Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem, fb717492.0, 2d21b73c.0, 0b1b94ef.0, 595e996b.0, Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem, 9b46e03d.0, 128f4b91.0, Buypass_Class_3_Root_CA.pem, 81f2d2b1.0, Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem, 3bde41ac.0, d16a5865.0,
Certum_EC-384_CA.pem, BJCA_Global_Root_CA1.pem, 0179095f.0, ffa7f1eb.0, 9482e63a.0, d4dae3dd.0, BJCA_Global_Root_CA2.pem, 3e359ba6.0, 7e067d03.0, 95aff9e3.0, d7746a63.0, Baltimore_CyberTrust_Root.pem, 653b494a.0,
3ad48a91.0, Certum_Trusted_Network_CA.pem, Buypass_Class_2_Root_CA.pem, 54657681.0, 82223c44.0, e8de2f56.0, 2d9dafe4.0, d96b65e2.0, ee64a828.0, COMODO_Certification_Authority.pem, 40547a79.0, 5a3f0ff8.0, 7a780d93.0,
34d996fb.0, COMODO_ECC_Certification_Authority.pem, eed8c118.0, 89c02a45.0, Certainly_Root_R1.pem, b1159c4c.0, COMODO_RSA_Certification_Authority.pem, d6325660.0, d4c339cb.0, 8312c4c1.0, Certainly_Root_E1.pem, 8508e720.0, 5fdd185d.0,
48bec511.0, 69105f4f.0, GlobalSign.1.pem, 0b9bc432.0, Certum_Trusted_Network_CA_2.pem, GTS_Root_R3.pem, 32888f65.0, CommScope_Public_Trust_ECC_Root-01.pem, 6b03dec0.0, 219d9499.0, CommScope_Public_Trust_ECC_Root-02.pem, 5acf816d.0, cbf06781.0,
CommScope_Public_Trust_RSA_Root-01.pem, GTS_Root_R4.pem, dc99f41e.0, CommScope_Public_Trust_RSA_Root-02.pem, GlobalSign.3.pem, AAA_Certificate_Services.pem, 985c1f52.0, 8794b4e3.0, D-TRUST_BR_Root_CA_1_2020.pem, e7c037b4.0, ef954a4e.0, D-TRUST_EV_Root_CA_1_2020.pem, 2add47b6.0,
90c5a3c8.0, D-TRUST_Root_Class_3_CA_2_2009.pem, b0f3e76e.0, 53a1b57a.0, D-TRUST_Root_Class_3_CA_2_EV_2009.pem, GlobalSign_Root_CA.pem, DigiCert_Assured_ID_Root_CA.pem, 5ad8a5d6.0, 68dd7389.0, DigiCert_Assured_ID_Root_G2.pem, 9d04f354.0, 8d6437c3.0, 062cdee6.0,
bd43e1dd.0, DigiCert_Assured_ID_Root_G3.pem, 7f3d5d1d.0, c491639e.0, GlobalSign_Root_E46.pem, DigiCert_Global_Root_CA.pem, 3513523f.0, 399e7759.0, feffd413.0, d18e9066.0, DigiCert_Global_Root_G2.pem, 607986c7.0, c90bc37d.0,
1b0f7e5c.0, 1e08bfd1.0, DigiCert_Global_Root_G3.pem, dd8e9d41.0, ed39abd0.0, a3418fda.0, bc3f2570.0, DigiCert_High_Assurance_EV_Root_CA.pem, 244b5494.0, 81b9768f.0, GlobalSign.2.pem, 4be590e0.0, DigiCert_TLS_ECC_P384_Root_G5.pem,
9846683b.0, 252252d2.0, 1e8e7201.0, ISRG_Root_X1.pem, DigiCert_TLS_RSA4096_Root_G5.pem, d52c538d.0, c44cc0c0.0, GlobalSign_Root_R46.pem, DigiCert_Trusted_Root_G4.pem, 75d1b2ed.0, a2c66da8.0, GTS_Root_R2.pem, ecccd8db.0,
Entrust.net_Certification_Authority__2048_.pem, aee5f10d.0, 3e7271e8.0, b0e59380.0, 4c3982f2.0, Entrust_Root_Certification_Authority.pem, 6b99d060.0, bf64f35b.0, 0a775a30.0, 002c0b4f.0, cc450945.0, Entrust_Root_Certification_Authority_-_EC1.pem, 106f3e4d.0,
b3fb433b.0, GlobalSign.pem, 4042bcee.0, Entrust_Root_Certification_Authority_-_G2.pem, 02265526.0, 455f1b52.0, 0d69c7e1.0, 9f727ac7.0, Entrust_Root_Certification_Authority_-_G4.pem, 5e98733a.0, f0cd152c.0, dc4d6a89.0, 6187b673.0,
FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem, ba8887ce.0, 068570d1.0, f081611a.0, 48a195d8.0, GDCA_TrustAUTH_R5_ROOT.pem, 0f6fa695.0, ab59055e.0, b92fd57f.0, GLOBALTRUST_2020.pem, fa5da96b.0, 1ec40989.0, 7719f463.0,
GTS_Root_R1.pem, 1001acf7.0, f013ecaf.0, 626dceaf.0, c559d742.0, 1d3472b9.0, 9479c8c3.0, a81e292b.0, 4bfab552.0, Go_Daddy_Class_2_Certification_Authority.pem, Sectigo_Public_Server_Authentication_Root_E46.pem, Go_Daddy_Root_Certificate_Authority_-_G2.pem, e071171e.0,
57bcb2da.0, HARICA_TLS_ECC_Root_CA_2021.pem, ab5346f4.0, 5046c355.0, HARICA_TLS_RSA_Root_CA_2021.pem, 865fbdf9.0, da0cfd1d.0, 85cde254.0, Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem, cbb3f32b.0, SecureSign_RootCA11.pem, Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem, 5860aaa6.0,
31188b5e.0, HiPKI_Root_CA_-_G1.pem, c7f1359b.0, 5f15c80c.0, Hongkong_Post_Root_CA_3.pem, 09789157.0, ISRG_Root_X2.pem, 18856ac4.0, 1e09d511.0, IdenTrust_Commercial_Root_CA_1.pem, cf701eeb.0, d06393bb.0, IdenTrust_Public_Sector_Root_CA_1.pem,
10531352.0, Izenpe.com.pem, SecureTrust_CA.pem, b0ed035a.0, Microsec_e-Szigno_Root_CA_2009.pem, 8160b96c.0, e8651083.0, 2c63f966.0, Security_Communication_RootCA2.pem, Microsoft_ECC_Root_Certificate_Authority_2017.pem, 8d89cda1.0, 01419da9.0, SSL.com_TLS_RSA_Root_CA_2022.pem]
Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 
14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:40 crc restorecon[4710]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 31 14:54:40 crc kubenswrapper[4763]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 14:54:40 crc kubenswrapper[4763]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 31 14:54:40 crc kubenswrapper[4763]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 14:54:40 crc kubenswrapper[4763]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 31 14:54:40 crc kubenswrapper[4763]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 31 14:54:40 crc kubenswrapper[4763]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.780478 4763 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786762 4763 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786799 4763 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786811 4763 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786822 4763 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786832 4763 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786842 4763 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786853 4763 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786865 4763 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786874 4763 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786883 4763 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786921 4763 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786930 4763 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786941 4763 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786953 4763 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786962 4763 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786971 4763 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786980 4763 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786989 4763 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786998 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787007 4763 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787016 4763 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787024 4763 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787033 4763 feature_gate.go:330] unrecognized feature gate: Example
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787041 4763 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787051 4763 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787060 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787070 4763 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787081 4763 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787089 4763 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787098 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787105 4763 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787113 4763 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787121 4763 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787129 4763 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787136 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787144 4763 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787152 4763 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787160 4763 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787169 4763 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787177 4763 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787184 4763 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787194 4763 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787202 4763 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787210 4763 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787218 4763 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787225 4763 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787245 4763 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787254 4763 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787264 4763 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787272 4763 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787280 4763 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787288 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787298 4763 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787306 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787315 4763 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787323 4763 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787332 4763 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787342 4763 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787351 4763 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787387 4763 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787396 4763 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787405 4763 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787412 4763 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787420 4763 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787428 4763 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787438 4763 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787446 4763 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787453 4763 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787461 4763 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787469 4763 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787476 4763 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790509 4763 flags.go:64] FLAG: --address="0.0.0.0"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790538 4763 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790565 4763 flags.go:64] FLAG: --anonymous-auth="true"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790578 4763 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790591 4763 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790600 4763 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790612 4763 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790623 4763 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790633 4763 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790642 4763 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790651 4763 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790663 4763 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790672 4763 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790682 4763 flags.go:64] FLAG: --cgroup-root=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790691 4763 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790730 4763 flags.go:64] FLAG: --client-ca-file=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790739 4763 flags.go:64] FLAG: --cloud-config=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790748 4763 flags.go:64] FLAG: --cloud-provider=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790757 4763 flags.go:64] FLAG: --cluster-dns="[]"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790783 4763 flags.go:64] FLAG: --cluster-domain=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790794 4763 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790808 4763 flags.go:64] FLAG: --config-dir=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790819 4763 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790832 4763 flags.go:64] FLAG: --container-log-max-files="5"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790848 4763 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790858 4763 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790867 4763 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790877 4763 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790886 4763 flags.go:64] FLAG: --contention-profiling="false"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790896 4763 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790906 4763 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790915 4763 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790924 4763 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790938 4763 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790948 4763 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790957 4763 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790967 4763 flags.go:64] FLAG: --enable-load-reader="false"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790976 4763 flags.go:64] FLAG: --enable-server="true"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790986 4763 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791000 4763 flags.go:64] FLAG: --event-burst="100"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791010 4763 flags.go:64] FLAG: --event-qps="50"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791020 4763 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791029 4763 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791038 4763 flags.go:64] FLAG: --eviction-hard=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791049 4763 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791059 4763 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791068 4763 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791079 4763 flags.go:64] FLAG: --eviction-soft=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791088 4763 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791097 4763 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791106 4763 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791115 4763 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791124 4763 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791133 4763 flags.go:64] FLAG: --fail-swap-on="true"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791142 4763 flags.go:64] FLAG: --feature-gates=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791153 4763 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791170 4763 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791179 4763 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791189 4763 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791198 4763 flags.go:64] FLAG: --healthz-port="10248"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791207 4763 flags.go:64] FLAG: --help="false"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791216 4763 flags.go:64] FLAG: --hostname-override=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791225 4763 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791234 4763 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791244 4763 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791253 4763 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791262 4763 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791271 4763 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791280 4763 flags.go:64] FLAG: --image-service-endpoint=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791289 4763 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791298 4763 flags.go:64] FLAG: --kube-api-burst="100"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791308 4763 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791318 4763 flags.go:64] FLAG: --kube-api-qps="50"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791327 4763 flags.go:64] FLAG: --kube-reserved=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791336 4763 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791344 4763 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791356 4763 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791406 4763 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791419 4763 flags.go:64] FLAG: --lock-file=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791431 4763 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791442 4763 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791455 4763 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791472 4763 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791486 4763 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791497 4763 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791509 4763 flags.go:64] FLAG: --logging-format="text"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791535 4763 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791546 4763 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791559 4763 flags.go:64] FLAG: --manifest-url=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791568 4763 flags.go:64] FLAG: --manifest-url-header=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791580 4763 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791590 4763 flags.go:64] FLAG: --max-open-files="1000000"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791601 4763 flags.go:64] FLAG: --max-pods="110"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791611 4763 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791623 4763 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791634 4763 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791647 4763 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791659 4763 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791670 4763 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791682 4763 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791738 4763 flags.go:64] FLAG: --node-status-max-images="50"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791749 4763 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791761 4763 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791772 4763 flags.go:64] FLAG: --pod-cidr=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791784 4763 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791802 4763 flags.go:64] FLAG: --pod-manifest-path=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791813 4763 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791825 4763 flags.go:64] FLAG: --pods-per-core="0"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791837 4763 flags.go:64] FLAG: --port="10250"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791849 4763 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791860 4763 flags.go:64] FLAG: --provider-id=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791870 4763 flags.go:64] FLAG: --qos-reserved=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791880 4763 flags.go:64] FLAG: --read-only-port="10255"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791889 4763 flags.go:64] FLAG: --register-node="true"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791898 4763 flags.go:64] FLAG: --register-schedulable="true"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791907 4763 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791923 4763 flags.go:64] FLAG: --registry-burst="10"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791932 4763 flags.go:64] FLAG: --registry-qps="5"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791941 4763 flags.go:64] FLAG: --reserved-cpus=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791965 4763 flags.go:64] FLAG: --reserved-memory=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791986 4763 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791995 4763 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792004 4763 flags.go:64] FLAG: --rotate-certificates="false"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792013 4763 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792022 4763 flags.go:64] FLAG: --runonce="false"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792030 4763 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792040 4763 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792049 4763 flags.go:64] FLAG: --seccomp-default="false"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792058 4763 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792067 4763 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792076 4763 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792085 4763 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792094 4763 flags.go:64] FLAG: --storage-driver-password="root"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792103 4763 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792112 4763 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792121 4763 flags.go:64] FLAG: --storage-driver-user="root"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792129 4763 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792138 4763 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792148 4763 flags.go:64] FLAG: --system-cgroups=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792156 4763 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792171 4763 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792180 4763 flags.go:64] FLAG: --tls-cert-file=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792188 4763 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792200 4763 flags.go:64] FLAG: --tls-min-version=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792209 4763 flags.go:64] FLAG: --tls-private-key-file=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792218 4763 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792228 4763 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792236 4763 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792245 4763 flags.go:64] FLAG: --v="2"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792257 4763 flags.go:64] FLAG: --version="false"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792268 4763 flags.go:64] FLAG: --vmodule=""
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792279 4763 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792291 4763 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792575 4763 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792588 4763 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792607 4763 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792616 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792624 4763 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792632 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792641 4763 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792649 4763 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792659 4763 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792668 4763 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792676 4763 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792684 4763 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792724 4763 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792733 4763 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792743 4763 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792754 4763 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792762 4763 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792771 4763 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792781 4763 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792791 4763 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792812 4763 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792825 4763 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792836 4763 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792846 4763 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792855 4763 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792864 4763 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792872 4763 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792881 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792889 4763 feature_gate.go:330] unrecognized feature gate: Example
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792896 4763 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792904 4763 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792913 4763 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792921 4763 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792928 4763 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792936 4763 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792943 4763 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792951 4763 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792959 4763 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792978 4763 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792987 4763 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792995 4763 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793003 4763 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793010 4763 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793018 4763 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793026 4763 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793033 4763 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793041 4763 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793048 4763 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793056 4763 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793064 4763 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793072 4763 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793079 4763 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793087 4763 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793095 4763 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793103 4763 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793112 4763 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793121 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793131 4763 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793141 4763 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793150 4763 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793158 4763 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793167 4763 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793176 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793184 4763 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793193 4763 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793202 4763 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793210 4763 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793217 4763 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793225 4763 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793234 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793241 4763 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.794499 4763 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.811750 4763 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.811827 4763 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.811983 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812000 4763 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812009 4763 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812018 4763 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812028 4763 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812036 4763 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812045 4763 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812053 4763 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812060 4763 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812068 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812076 4763 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812085 4763 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812092 4763 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812100 4763 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812108 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812117 4763 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812126 4763 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812136 4763 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812148 4763 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812158 4763 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812167 4763 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812175 4763 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812187 4763 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812201 4763 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812210 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812221 4763 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812230 4763 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812239 4763 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812247 4763 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812258 4763 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812267 4763 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812275 4763 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812283 4763 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812291 4763 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812299 4763 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812307 4763 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812315 4763 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812323 4763 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812331 4763 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812339 4763 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812347 4763 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812355 4763 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812363 4763 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812371 4763 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812379 4763 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812388 4763 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812399 4763 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812414 4763 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812428 4763 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812445 4763 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812458 4763 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812470 4763 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812483 4763 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812497 4763 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812511 4763 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812523 4763 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812532 4763 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812541 4763 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812549 4763 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812558 4763 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812566 4763 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812575 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812583 4763 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812591 4763 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812599 4763 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812606 4763 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812614 4763 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812622 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812630 4763 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812638 4763 feature_gate.go:330] unrecognized feature gate: Example
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812645 4763 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.812660 4763 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813157 4763 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813175 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813184 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813195 4763 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813208 4763 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813219 4763 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813228 4763 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813238 4763 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813247 4763 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813256 4763 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813264 4763 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813273 4763 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813280 4763 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813291 4763 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813301 4763 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813311 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813320 4763 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813329 4763 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813337 4763 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813345 4763 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813354 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813362 4763 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813369 4763 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813378 4763 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813385 4763 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813394 4763 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813402 4763 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813412 4763 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813421 4763 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813429 4763 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813438 4763 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813447 4763 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813455 4763 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813463 4763 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813473 4763 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813482 4763 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813490 4763 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813497 4763 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813505 4763 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813514 4763 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813521 4763 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813530 4763 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813538 4763 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813547 4763 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813555 4763 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813563 4763 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813572 4763 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813579 4763 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813587 4763 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813595 4763 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813603 4763 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813611 4763 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813621 4763 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813631 4763 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813640 4763 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813649 4763 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813658 4763 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813666 4763 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813674 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813682 4763 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813689 4763 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813719 4763 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813727 4763 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813735 4763 feature_gate.go:330] unrecognized feature gate: Example
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813743 4763 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813787 4763 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813795 4763 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813803 4763 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813811 4763 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813819 4763 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813827 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.813839 4763 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.814201 4763 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.820874 4763 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.821042 4763 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.822959 4763 server.go:997] "Starting client certificate rotation"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.823012 4763 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.823289 4763 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-20 22:42:00.626152106 +0000 UTC
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.823487 4763 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.856979 4763 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.860962 4763 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 31 14:54:40 crc kubenswrapper[4763]: E0131 14:54:40.861413 4763 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.881436 4763 log.go:25] "Validated CRI v1 runtime API"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.923581 4763 log.go:25] "Validated CRI v1 image API"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.925834 4763 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.932594 4763 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-31-14-50-26-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.932643 4763 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.963815 4763 manager.go:217] Machine: {Timestamp:2026-01-31 14:54:40.960140973 +0000 UTC m=+0.714879356 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:dae69c69-4f41-4a04-af59-12d21fa5088f BootID:b7852931-3d3a-417c-b1dc-4eae70947913 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:3d:f1:01 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:3d:f1:01 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:60:29:6d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:c5:0c:c9 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e1:e3:cc Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c6:d8:c8 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ce:15:bc:98:93:c1 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:c6:8a:2f:39:7e:c5 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.964289 4763 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.964458 4763 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.964688 4763 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.964876 4763 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.964915 4763 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.965112 4763 topology_manager.go:138] "Creating topology manager with none policy"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.965124 4763 container_manager_linux.go:303] "Creating device plugin manager"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.965608 4763 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.965640 4763 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.966292 4763 state_mem.go:36] "Initialized new in-memory state store"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.966370 4763 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.973554 4763 kubelet.go:418] "Attempting to sync node with API server"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.973580 4763 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.973603 4763 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.973614 4763 kubelet.go:324] "Adding apiserver pod source"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.973628 4763 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.977208 4763 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.977981 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Jan 31 14:54:40 crc kubenswrapper[4763]: E0131 14:54:40.978153 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.977980 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Jan 31 14:54:40 crc kubenswrapper[4763]: E0131 14:54:40.978299 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.978181 4763 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.980308 4763 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982034 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982058 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982065 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982071 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982082 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982090 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982098 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982109 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982119 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982129 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982139 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982146 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982996 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.983379 4763 server.go:1280] "Started kubelet"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.984491 4763 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.984518 4763 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.984577 4763 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Jan 31 14:54:40 crc systemd[1]: Started Kubernetes Kubelet.
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.985539 4763 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.986751 4763 server.go:460] "Adding debug handlers to kubelet server"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.987459 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.987747 4763 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.987791 4763 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.987807 4763 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 31 14:54:40 crc kubenswrapper[4763]: E0131 14:54:40.987901 4763 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.987763 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 02:30:19.92375395 +0000 UTC
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.988060 4763 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.988940 4763 factory.go:55] Registering systemd factory
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.988974 4763 factory.go:221] Registration of the systemd container factory successfully
Jan 31 14:54:40 crc kubenswrapper[4763]: E0131 14:54:40.991157 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="200ms"
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.989229 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Jan 31 14:54:40 crc kubenswrapper[4763]: E0131 14:54:40.991384 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.991441 4763 factory.go:153] Registering CRI-O factory
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.991464 4763 factory.go:221] Registration of the crio container factory successfully
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.993137 4763 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.993225 4763 factory.go:103] Registering Raw factory
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.993367 4763 manager.go:1196] Started watching for new ooms in manager
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.996735 4763 manager.go:319] Starting recovery of all containers
Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.002255 4763 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fd88d892daaee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 14:54:40.983354094 +0000 UTC m=+0.738092387,LastTimestamp:2026-01-31 14:54:40.983354094 +0000 UTC m=+0.738092387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.011947 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012006 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012022 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012038 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012051 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012064 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012078 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012090 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012107 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012120 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012140 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012154 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012167 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012185 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012199 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012211 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012223 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012239 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012254 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012270 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012283 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012296 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012311 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012320 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012357 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012371 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012385 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012399 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012413 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012427 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012465 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012480 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012515 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012530 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012544 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012558 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012571 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012586 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012600 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012613 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.014927 4763 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.015034 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.015176 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.015383 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.015512 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.015620 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.015851 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.015949 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.016278 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.016398 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.016552 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.016655 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.016763 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.016899 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.016996 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.018184 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.018448 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.018940 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.019029 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.019109 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.019185 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.020195 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.020297 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.020382 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.020474 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.020555 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.020639 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.020748 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.020832 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.020919 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021000 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021072 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021146 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021221 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021295 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021373 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021456 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021541 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021619 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021716 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021814 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021900 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021983 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022065 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022155 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022234 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022318 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022397 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022481 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022564 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022648 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022757 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022836 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022912 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022990 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023063 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023144 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023218 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023294 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023370 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023466 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023581 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023672 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023782 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023853 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023921 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023981 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" 
volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024053 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024115 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024177 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024234 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024295 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024355 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024412 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024485 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024561 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024640 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024815 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024908 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024987 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025061 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025136 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025216 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025302 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025388 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025452 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025506 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025558 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025614 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025667 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025737 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025801 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025879 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025988 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026073 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026144 4763 manager.go:324] Recovery completed Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026150 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026480 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026498 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026513 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026525 4763 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026536 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026547 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026557 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026568 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026579 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026590 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026602 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026613 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026623 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026633 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026643 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026654 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026664 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026675 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026685 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026708 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026717 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026727 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026737 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026748 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026758 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026768 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026779 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026789 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026800 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026810 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026820 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026830 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026843 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026853 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026862 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026873 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026883 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026893 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026902 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026913 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026925 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026935 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026945 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026956 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026967 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026978 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026987 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026999 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027008 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027018 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027028 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027039 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027049 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027059 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027070 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027081 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027091 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027101 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027112 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027130 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027141 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027152 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027162 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027171 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027182 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027193 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027203 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027213 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027224 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027235 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027244 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027254 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027275 4763 reconstruct.go:97] "Volume reconstruction finished" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027283 4763 reconciler.go:26] "Reconciler: start to sync state" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.038565 4763 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.040425 4763 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.040457 4763 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.040482 4763 kubelet.go:2335] "Starting kubelet main sync loop" Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.040522 4763 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.040623 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: W0131 14:54:41.044124 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.044204 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.047133 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.047165 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.047176 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.047972 4763 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.047987 4763 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.048006 4763 state_mem.go:36] "Initialized new in-memory state store" Jan 31 
14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.075245 4763 policy_none.go:49] "None policy: Start" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.076271 4763 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.076308 4763 state_mem.go:35] "Initializing new in-memory state store" Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.088024 4763 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.124937 4763 manager.go:334] "Starting Device Plugin manager" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.124992 4763 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.125008 4763 server.go:79] "Starting device plugin registration server" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.125515 4763 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.125531 4763 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.125823 4763 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.125920 4763 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.125930 4763 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.137607 4763 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.141363 4763 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.141476 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.142523 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.142566 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.142579 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.142788 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.142953 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.143000 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.143905 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.143927 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.143936 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.144050 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.144105 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.144125 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.144375 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.144445 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.144479 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.145497 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.145536 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.145551 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.146144 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.146188 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.146198 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.146326 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.146449 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.146480 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147245 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147266 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147274 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147326 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147348 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147361 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147386 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147600 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147627 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147935 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147959 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147971 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.148177 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.148211 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.148273 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.148302 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.148320 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.149417 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.149452 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.149468 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.192365 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="400ms" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.225951 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.227006 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.227067 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.227086 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.227128 4763 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.227786 4763 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229232 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229307 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229345 4763 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229376 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229404 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229434 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229465 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229497 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229573 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229617 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229668 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229732 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229774 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229826 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229865 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331509 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331617 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331746 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331809 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331859 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331905 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331946 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331941 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331994 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332020 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332062 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332042 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331972 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332099 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331993 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332109 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332678 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332167 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332846 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332875 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332944 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332894 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.333002 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.333142 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332029 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.333059 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.333329 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.333402 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.333761 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.333883 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.427935 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.429741 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.429801 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.429820 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.429856 4763 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.430465 4763 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.481034 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.498869 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.522865 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.529105 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.532654 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: W0131 14:54:41.545182 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-f2dc4ef8dd1936996f421598f158e81d6e2fd49b7c2848d644d4df2ca35ad944 WatchSource:0}: Error finding container f2dc4ef8dd1936996f421598f158e81d6e2fd49b7c2848d644d4df2ca35ad944: Status 404 returned error can't find the container with id f2dc4ef8dd1936996f421598f158e81d6e2fd49b7c2848d644d4df2ca35ad944 Jan 31 14:54:41 crc kubenswrapper[4763]: W0131 14:54:41.561036 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7eca7a880f14cf128c6b544a65025bd1182729357b3a714fa5dcfe8cd769b5ff WatchSource:0}: Error finding container 7eca7a880f14cf128c6b544a65025bd1182729357b3a714fa5dcfe8cd769b5ff: Status 404 returned error can't find the container with id 7eca7a880f14cf128c6b544a65025bd1182729357b3a714fa5dcfe8cd769b5ff Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.593624 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="800ms" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.830942 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.833062 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.833118 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.833132 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.833165 4763 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.833708 4763 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.985683 4763 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.988660 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 21:16:43.21550503 +0000 UTC Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.050029 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"56f4eabf9c31133b3b1167a4f704d076770dfdeb48377d30b1cbdecd3f09aea8"} Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.051242 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2d283ccf794b468b3f6c90bbb15e3f278f236a6cd071e253067dcb1b41561db7"} Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.053640 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7eca7a880f14cf128c6b544a65025bd1182729357b3a714fa5dcfe8cd769b5ff"} Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.054896 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3b7f1e78114cf80c259c356353ecdc487963868489acd3ee46e8e8a71cf3128c"} Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.055837 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f2dc4ef8dd1936996f421598f158e81d6e2fd49b7c2848d644d4df2ca35ad944"} Jan 31 14:54:42 crc kubenswrapper[4763]: W0131 14:54:42.068625 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 31 14:54:42 crc kubenswrapper[4763]: E0131 14:54:42.068745 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:54:42 crc kubenswrapper[4763]: W0131 14:54:42.095964 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 31 14:54:42 crc kubenswrapper[4763]: E0131 14:54:42.096111 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:54:42 crc kubenswrapper[4763]: W0131 14:54:42.097382 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 31 14:54:42 crc kubenswrapper[4763]: E0131 14:54:42.097458 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": 
dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:54:42 crc kubenswrapper[4763]: W0131 14:54:42.362486 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 31 14:54:42 crc kubenswrapper[4763]: E0131 14:54:42.362590 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:54:42 crc kubenswrapper[4763]: E0131 14:54:42.394889 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="1.6s" Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.634432 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.637119 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.637184 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.637205 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.637247 4763 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:54:42 crc kubenswrapper[4763]: E0131 14:54:42.638048 4763 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.986287 4763 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.989420 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 22:24:26.794374122 +0000 UTC Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.041094 4763 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 14:54:43 crc kubenswrapper[4763]: E0131 14:54:43.042393 4763 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.064828 4763 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" 
containerID="9b969d8a316d58e6e57d70e05ba1213b54e8ce8ddb87cbdc9f387758d2d63ccb" exitCode=0 Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.065049 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.065835 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"9b969d8a316d58e6e57d70e05ba1213b54e8ce8ddb87cbdc9f387758d2d63ccb"} Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.067583 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.067629 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.067648 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.069265 4763 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6" exitCode=0 Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.069332 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6"} Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.069441 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.071429 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.071481 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.071503 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.075402 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098"} Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.075479 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063"} Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.075501 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579"} Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.075520 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f"} Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.075657 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.079806 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.079864 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.079892 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.081745 4763 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295" exitCode=0 Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.081864 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295"} Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.081877 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.083681 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.083773 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.083796 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.085105 4763 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc" exitCode=0 Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.085166 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc"} Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.085329 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.086719 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.086772 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.086808 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.087059 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.089527 4763 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.089580 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.089597 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.985541 4763 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.989714 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 02:38:07.886693214 +0000 UTC Jan 31 14:54:43 crc kubenswrapper[4763]: E0131 14:54:43.996655 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="3.2s" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.091048 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a"} Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.091148 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd"} Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.091164 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97"} Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.091275 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.092228 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.092256 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.092268 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.095234 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12"} Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.095271 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e"} Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.095282 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b"} Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.095293 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c"} Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.099781 4763 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f" exitCode=0 Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.099863 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f"} Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.100013 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.100941 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.101033 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.101048 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.105815 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.105844 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"dd4c38511fece2af6df3bb93ecff7c793bbf4320c7b78e9996fa88a8775d2752"} Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.105815 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.107004 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.107096 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.107124 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.107099 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.107208 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:44 crc 
Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.107221 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:44 crc kubenswrapper[4763]: W0131 14:54:44.143421 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Jan 31 14:54:44 crc kubenswrapper[4763]: E0131 14:54:44.143502 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.238756 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.239984 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.240037 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.240050 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.240083 4763 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 31 14:54:44 crc kubenswrapper[4763]: E0131 14:54:44.240579 4763 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc"
Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.369289 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 14:54:44 crc kubenswrapper[4763]: W0131 14:54:44.407580 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Jan 31 14:54:44 crc kubenswrapper[4763]: E0131 14:54:44.407732 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Jan 31 14:54:44 crc kubenswrapper[4763]: W0131 14:54:44.434085 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Jan 31 14:54:44 crc kubenswrapper[4763]: E0131 14:54:44.434199 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.990917 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 16:28:59.670206889 +0000 UTC
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.110654 4763 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd" exitCode=0
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.110756 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd"}
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.110844 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.112108 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.112162 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.112182 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.117159 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b"}
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.117244 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.117307 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.117250 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.117350 4763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.117419 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.118590 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.118636 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.118650 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.119083 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.119129 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.119147 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.119482 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.119524 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.119537 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.119740 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.119802 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.119840 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.394244 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.598269 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.737266 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.745738 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.991600 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 00:35:29.996977029 +0000 UTC
Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.123858 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931"}
Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.123907 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.123919 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e"}
Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.123944 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90"}
Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.124041 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.124175 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.125008 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.125051 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.125067 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.125222 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.125269 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.125288 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.992452 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 04:01:14.529493056 +0000 UTC
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.132306 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6"}
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.132389 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142"}
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.132589 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.132606 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.132739 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.134297 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.134338 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.134355 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.134355 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.134419 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.134440 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.135361 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.135405 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.135433 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.393312 4763 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.441409 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.443012 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.443063 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.443076 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.443110 4763 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.476294 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.992977 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 09:44:17.16274971 +0000 UTC
Jan 31 14:54:48 crc kubenswrapper[4763]: I0131 14:54:48.134923 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:48 crc kubenswrapper[4763]: I0131 14:54:48.135101 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:48 crc kubenswrapper[4763]: I0131 14:54:48.136471 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:48 crc kubenswrapper[4763]: I0131 14:54:48.136516 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:48 crc kubenswrapper[4763]: I0131 14:54:48.136533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:48 crc kubenswrapper[4763]: I0131 14:54:48.136636 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:48 crc kubenswrapper[4763]: I0131 14:54:48.136687 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:48 crc kubenswrapper[4763]: I0131 14:54:48.136738 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:48 crc kubenswrapper[4763]: I0131 14:54:48.993402 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 00:54:13.725015816 +0000 UTC
Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.148274 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.148511 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.149773 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.149804 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.149815 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.941058 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.941312 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.942668 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.942737 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.942753 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.994425 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:11:26.188769159 +0000 UTC
Jan 31 14:54:50 crc kubenswrapper[4763]: I0131 14:54:50.994753 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:10:36.532020552 +0000 UTC
Jan 31 14:54:51 crc kubenswrapper[4763]: E0131 14:54:51.138634 4763 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 31 14:54:51 crc kubenswrapper[4763]: I0131 14:54:51.833405 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 31 14:54:51 crc kubenswrapper[4763]: I0131 14:54:51.833760 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:51 crc kubenswrapper[4763]: I0131 14:54:51.835457 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:51 crc kubenswrapper[4763]: I0131 14:54:51.835533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:51 crc kubenswrapper[4763]: I0131 14:54:51.835555 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:51 crc kubenswrapper[4763]: I0131 14:54:51.994911 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 03:33:51.370631304 +0000 UTC
Jan 31 14:54:52 crc kubenswrapper[4763]: I0131 14:54:52.995422 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:30:27.725424238 +0000 UTC
Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.078547 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.079515 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.081284 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.081349 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.081370 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.086850 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.148859 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.150229 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.150296 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.150316 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.996430 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 22:10:29.0394643 +0000 UTC
Jan 31 14:54:54 crc kubenswrapper[4763]: W0131 14:54:54.805269 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 31 14:54:54 crc kubenswrapper[4763]: I0131 14:54:54.805430 4763 trace.go:236] Trace[1269472654]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 14:54:44.803) (total time: 10001ms):
Jan 31 14:54:54 crc kubenswrapper[4763]: Trace[1269472654]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:54:54.805)
Jan 31 14:54:54 crc kubenswrapper[4763]: Trace[1269472654]: [10.001634467s] [10.001634467s] END
Jan 31 14:54:54 crc kubenswrapper[4763]: E0131 14:54:54.805468 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 31 14:54:54 crc kubenswrapper[4763]: I0131 14:54:54.973208 4763 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 31 14:54:54 crc kubenswrapper[4763]: I0131 14:54:54.973312 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 31 14:54:54 crc kubenswrapper[4763]: I0131 14:54:54.980183 4763 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 31 14:54:54 crc kubenswrapper[4763]: I0131 14:54:54.980265 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 31 14:54:54 crc kubenswrapper[4763]: I0131 14:54:54.996902 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 13:15:36.086558246 +0000 UTC
Jan 31 14:54:55 crc kubenswrapper[4763]: I0131 14:54:55.997059 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:20:58.127190937 +0000 UTC
Jan 31 14:54:56 crc kubenswrapper[4763]: I0131 14:54:56.078677 4763 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 14:54:56 crc kubenswrapper[4763]: I0131 14:54:56.078829 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 14:54:56 crc kubenswrapper[4763]: I0131 14:54:56.997445 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 00:06:22.663350072 +0000 UTC
Jan 31 14:54:57 crc kubenswrapper[4763]: I0131 14:54:57.478217 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 14:54:57 crc kubenswrapper[4763]: I0131 14:54:57.478373 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:57 crc kubenswrapper[4763]: I0131 14:54:57.479226 4763 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
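The 403s above come from the kube-apiserver startup probe: the kubelet requests /livez over HTTPS, and until the apiserver's authorization stack is serving, the anonymous request is rejected, which the probe counts as a failure just like a connection refusal. Below is a sketch of an equivalent HTTPS startup probe expressed with the k8s.io/api types; the path and port are taken from the log, while the timing values are illustrative assumptions, not the pod's actual manifest.

```go
// Sketch (not the actual kube-apiserver static-pod manifest) of an HTTPS
// startup probe like the one failing above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func startupProbe() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/livez", // endpoint seen in the 403 messages above
				Port:   intstr.FromInt(6443),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		// Assumed values: a generous window, since the apiserver can take
		// tens of seconds to begin answering authorized requests.
		InitialDelaySeconds: 10,
		PeriodSeconds:       5,
		FailureThreshold:    30,
	}
}

func main() {
	fmt.Printf("%+v\n", startupProbe().ProbeHandler.HTTPGet)
}
```

Note that a startup probe only gates the liveness/readiness probes; the repeated failures here simply mean the pod stays "started=false" until one attempt succeeds, which is what the later "probe=\"startup\" status=\"started\"" entry records.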
Jan 31 14:54:57 crc kubenswrapper[4763]: I0131 14:54:57.479324 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 31 14:54:57 crc kubenswrapper[4763]: I0131 14:54:57.479725 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:57 crc kubenswrapper[4763]: I0131 14:54:57.479842 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:57 crc kubenswrapper[4763]: I0131 14:54:57.479912 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:57 crc kubenswrapper[4763]: I0131 14:54:57.485202 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 14:54:57 crc kubenswrapper[4763]: I0131 14:54:57.998441 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 22:41:34.651640572 +0000 UTC
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.026255 4763 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.162759 4763 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.163098 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.985810 4763 apiserver.go:52] "Watching apiserver"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.990640 4763 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.991093 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.991572 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.991726 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.991758 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:54:58 crc kubenswrapper[4763]: E0131 14:54:58.992066 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 14:54:58 crc kubenswrapper[4763]: E0131 14:54:58.991967 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.992241 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.992590 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:54:58 crc kubenswrapper[4763]: E0131 14:54:58.992739 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.992360 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.994443 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.995039 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.996450 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.997185 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.997360 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.997273 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.997589 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.997885 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.997942 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.998642 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:00:11.036378911 +0000 UTC
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.065500 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
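Entries of every severity (I/W/E) interleave heavily from this point on, which makes the file awkward to read by hand. The klog prefix is regular enough to slice mechanically; a small sketch follows (the field names are our own labels, not klog terminology, and the journald "Jan 31 ... kubenswrapper[4763]:" prefix is assumed to have been stripped first).

```go
// Sketch: split a klog-formatted line (severity+MMDD, time, pid, file:line,
// message) into its fields so entries can be filtered by severity or source.
package main

import (
	"fmt"
	"regexp"
)

var klogRe = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w./]+:\d+)\] (.*)$`)

func main() {
	line := `E0131 14:54:58.992066    4763 pod_workers.go:1301] "Error syncing pod, skipping" pod="openshift-network-diagnostics/network-check-target-xd92c"`
	if m := klogRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%s\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
```

Filtering on severity "E" and source prefix "pod_workers" isolates the CNI-not-ready failures above from the surrounding informational churn.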
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.081130 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.088566 4763 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.094932 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.105663 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.115356 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.125133 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.135778 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.145196 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.164915 4763 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.164966 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 31 14:54:59 crc kubenswrapper[4763]: E0131 14:54:59.955473 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.957787 4763 trace.go:236] Trace[561870413]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 14:54:48.441) (total time: 11516ms):
Jan 31 14:54:59 crc kubenswrapper[4763]: Trace[561870413]: ---"Objects listed" error: 11516ms (14:54:59.957)
Jan 31 14:54:59 crc kubenswrapper[4763]: Trace[561870413]: [11.516350693s] [11.516350693s] END
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.958065 4763 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.958543 4763 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 31 14:54:59 crc kubenswrapper[4763]: E0131 14:54:59.959541 4763 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.961412 4763 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.966241 4763 trace.go:236] Trace[996093763]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 14:54:49.607) (total time: 10358ms):
Jan 31 14:54:59 crc kubenswrapper[4763]: Trace[996093763]: ---"Objects listed" error: 10358ms (14:54:59.966)
Jan 31 14:54:59 crc kubenswrapper[4763]: Trace[996093763]: [10.358442512s] [10.358442512s] END
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.966269 4763 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
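The "Failed to ensure lease exists, will retry" entry above is the node-heartbeat path: the kubelet upserts a coordination.k8s.io/v1 Lease named after the node in the kube-node-lease namespace, and backs off (here 6.4s) when the API server does not answer in time. A library-style sketch of that renewal with client-go follows; the kubeconfig path is an assumption, and error handling is reduced to the retry-relevant case.

```go
// Sketch of kubelet-style node-lease renewal; not the kubelet's own code.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// renewLease fetches the node's Lease in kube-node-lease and bumps its
// renewTime; on errors such as "context deadline exceeded" the caller
// retries after a backoff interval, as the log entry above records.
func renewLease(ctx context.Context, cs kubernetes.Interface, node string) error {
	leases := cs.CoordinationV1().Leases("kube-node-lease")
	lease, err := leases.Get(ctx, node, metav1.GetOptions{})
	if err != nil {
		return err
	}
	now := metav1.NewMicroTime(time.Now())
	lease.Spec.RenewTime = &now
	_, err = leases.Update(ctx, lease, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(renewLease(context.Background(), cs, "crc"))
}
```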
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.982414 4763 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.998899 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 22:46:39.324159494 +0000 UTC
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.008089 4763 csr.go:261] certificate signing request csr-vnnb4 is approved, waiting to be issued
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.014217 4763 csr.go:257] certificate signing request csr-vnnb4 is issued
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.062545 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.062862 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063026 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063174 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063277 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063112 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063331 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063538 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063655 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063773 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063792 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063860 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063885 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063903 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063918 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063936 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063980 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063997 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.064014 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.064799 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.064893 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.064937 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065190 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065246 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065327 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065376 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065405 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065426 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065447 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065474 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065497 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065516 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065567 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065592 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065594 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065616 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065636 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065658 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065681 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065719 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065742 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065750 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065765 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065790 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065812 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065833 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065855 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065856 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065876 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065929 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065967 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066004 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066030 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066048 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066089 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066119 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066129 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066205 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066210 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066247 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066284 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066321 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066342 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066370 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066409 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066454 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066457 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066507 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066543 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066564 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066589 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066615 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066643 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066667 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066672 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066718 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066732 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066798 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066836 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066870 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066886 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066937 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066975 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067016 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067057 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067063 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067086 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067108 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067146 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067191 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067233 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067274 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067297 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067313 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067327 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067452 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067487 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067534 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067550 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067573 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067607 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067651 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067676 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067718 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067755 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067791 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067886 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067900 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067959 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068079 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068137 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068178 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068212 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068216 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068288 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068626 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068812 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068452 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068371 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068611 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068874 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.070273 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.071476 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.070965 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.071686 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.071766 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.071681 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.071927 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.072158 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.072200 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.081895 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.081935 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.082110 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.069079 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.082274 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.082429 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.082571 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.082969 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083033 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.083065 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:55:00.583043943 +0000 UTC m=+20.337782236 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083098 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083126 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083146 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083164 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083182 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083202 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083221 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083240 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083239 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083259 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083278 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083296 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083316 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083348 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083368 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083387 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083405 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083423 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 
14:55:00.083440 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083458 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083476 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083493 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083510 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083527 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083547 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083569 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083594 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083620 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 
14:55:00.083638 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083658 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083680 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083731 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083749 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083766 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083784 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083820 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083839 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083856 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 14:55:00 crc 
kubenswrapper[4763]: I0131 14:55:00.083872 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083888 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083907 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083928 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083952 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083970 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083986 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084003 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084020 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084037 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 14:55:00 crc 
kubenswrapper[4763]: I0131 14:55:00.084056 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084077 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084107 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084134 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084160 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084185 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084209 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084235 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084261 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084285 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084313 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084335 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084413 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084438 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084460 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084482 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084504 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084529 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084555 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084574 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084614 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084631 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084647 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084665 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084680 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085228 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085250 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085267 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085284 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085301 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085321 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085343 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085368 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085390 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085414 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085436 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085454 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085471 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085488 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085506 4763 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085525 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085562 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085579 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085595 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085611 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085631 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085650 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085668 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085686 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085724 4763 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085741 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085762 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085780 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085798 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085820 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085869 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085910 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085934 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085957 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 
14:55:00.085976 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085995 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086041 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086070 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086094 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086121 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086142 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086161 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086186 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086206 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086222 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086242 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086259 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086279 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086316 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086336 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086417 4763 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086429 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" 
DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086441 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086452 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086462 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086473 4763 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086483 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086495 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086506 4763 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086515 4763 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086524 4763 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086535 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086544 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086553 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086563 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on 
node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086572 4763 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086581 4763 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086590 4763 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086600 4763 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086609 4763 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086619 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086628 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086638 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086649 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086659 4763 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086671 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086681 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086705 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086715 4763 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086727 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086736 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086744 4763 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086754 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086763 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086773 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086782 4763 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086792 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086800 4763 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086812 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086826 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086838 4763 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086850 4763 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086862 4763 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086875 4763 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086884 4763 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086895 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086905 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086914 4763 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086924 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086933 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086942 4763 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086951 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086960 4763 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094038 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094813 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.096802 4763 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083258 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083288 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083420 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083560 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.109605 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083712 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083830 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084046 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084235 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084256 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084416 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084508 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084595 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084910 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085142 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085714 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085903 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.092131 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.091968 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.092538 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.093578 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.093954 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094099 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094162 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.094310 4763 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094378 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.094405 4763 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094570 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094651 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094783 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094912 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094967 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.095170 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.095190 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.095518 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.096157 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.096343 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.097460 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.097945 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.098116 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.098171 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.098680 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.098935 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.099225 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.099876 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.100055 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.100232 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.100396 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.100555 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.106589 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.107025 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.107319 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). 
InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.107578 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.107715 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.107941 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.107940 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.108178 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.108391 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.108732 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.108734 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.108860 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.108952 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.109354 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.109678 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.110330 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:00.610307904 +0000 UTC m=+20.365046197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.110381 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:00.610371976 +0000 UTC m=+20.365110269 (durationBeforeRetry 500ms). 
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.111571 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.111733 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.111839 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.111888 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.111911 4763 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.112334 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.112396 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.112512 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.112682 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.112876 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.112895 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.112995 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:00.612967986 +0000 UTC m=+20.367706279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.113254 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.113461 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.113574 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.113504 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.113846 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.113923 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.113960 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.114134 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.114162 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.114828 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). 
InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.114851 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.114845 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.114170 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.115178 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.115377 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.115530 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.115810 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.115833 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.115999 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.116255 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.116315 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.116533 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.116623 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.116833 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.117009 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.117234 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.117952 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.118095 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.118361 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.118418 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.109210 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-ghn8r"] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.118759 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.119144 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.119170 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.119197 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-qcb97"] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.119317 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.119361 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.119548 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.119731 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.119988 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.120063 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.120155 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-qcb97"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.120547 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.120553 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.120969 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.121937 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.122312 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.122539 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.123522 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.123529 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
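The "SyncLoop ADD" and util.go:30 "No sandbox for pod can be found. Need to start a new one" entries above mark the start of the runtime side of the pod lifecycle: before any container is created, the kubelet asks the CRI runtime for a pod sandbox. A minimal sketch of that call against the CRI v1 gRPC API; the CRI-O socket path and sandbox metadata here are assumptions drawn from the log, and the real kubelet fills in much more of the sandbox config:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI-O socket path on an OpenShift node.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// What follows "Need to start a new one": request a fresh sandbox
	// for the pod; containers are created into it afterwards.
	resp, err := rt.RunPodSandbox(context.TODO(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "node-ca-ghn8r",
				Namespace: "openshift-image-registry",
				Uid:       "example-uid", // hypothetical; the real UID comes from the API object
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox ID:", resp.PodSandboxId)
}
```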
InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.123620 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.124065 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.124177 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.124283 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.129045 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.124836 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.124871 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.125037 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.131006 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.131671 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.131717 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.131733 4763 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.132798 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:00.632777746 +0000 UTC m=+20.387516039 (durationBeforeRetry 500ms). 
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.136175 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.136299 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.136563 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.137364 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.137598 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.138115 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.138979 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.139095 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.139158 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.139335 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.139681 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.141339 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.141976 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.142355 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.143095 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.143280 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.143838 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.143838 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.144470 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.144989 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.145399 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.146455 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.146684 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.146940 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.147776 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.148139 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.151327 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.154821 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.154946 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.156506 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.164297 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.167717 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218660 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218719 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/69a7de9b-f4a3-408b-8b12-570db6fcd84f-hosts-file\") pod \"node-resolver-qcb97\" (UID: \"69a7de9b-f4a3-408b-8b12-570db6fcd84f\") " pod="openshift-dns/node-resolver-qcb97" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218738 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzpgh\" (UniqueName: \"kubernetes.io/projected/69a7de9b-f4a3-408b-8b12-570db6fcd84f-kube-api-access-gzpgh\") pod \"node-resolver-qcb97\" (UID: \"69a7de9b-f4a3-408b-8b12-570db6fcd84f\") " 
pod="openshift-dns/node-resolver-qcb97" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218777 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/82ab6d11-5754-4903-ac36-bb0279dfa1fa-serviceca\") pod \"node-ca-ghn8r\" (UID: \"82ab6d11-5754-4903-ac36-bb0279dfa1fa\") " pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218795 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlrxh\" (UniqueName: \"kubernetes.io/projected/82ab6d11-5754-4903-ac36-bb0279dfa1fa-kube-api-access-vlrxh\") pod \"node-ca-ghn8r\" (UID: \"82ab6d11-5754-4903-ac36-bb0279dfa1fa\") " pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218821 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218835 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82ab6d11-5754-4903-ac36-bb0279dfa1fa-host\") pod \"node-ca-ghn8r\" (UID: \"82ab6d11-5754-4903-ac36-bb0279dfa1fa\") " pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218893 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218906 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218915 4763 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218924 4763 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218932 4763 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218941 4763 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218949 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218958 4763 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218966 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218974 4763 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218982 4763 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218990 4763 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218999 4763 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219007 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219018 4763 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219027 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219036 4763 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219045 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219053 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219062 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219070 4763 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219078 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219086 4763 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219094 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219101 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219109 4763 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219117 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219124 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219132 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219140 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219148 4763 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219155 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219163 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219172 4763 reconciler_common.go:293] "Volume detached 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219179 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219186 4763 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219194 4763 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219202 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219210 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219217 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219225 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219232 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219240 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219248 4763 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219256 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219264 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219273 4763 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219283 4763 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219291 4763 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219298 4763 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219306 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219314 4763 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219322 4763 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219330 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219338 4763 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219346 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219354 4763 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219361 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219369 4763 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219377 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219385 4763 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219393 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219401 4763 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219409 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219417 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219424 4763 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219432 4763 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219440 4763 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219448 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219456 4763 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219465 4763 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219473 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219481 4763 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219489 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219496 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219504 4763 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219512 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219522 4763 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219531 4763 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219539 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219547 4763 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219555 4763 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219563 4763 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219573 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219580 4763 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219588 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219596 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219605 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219613 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219621 4763 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219629 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219637 4763 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219645 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219653 4763 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219661 4763 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219669 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219676 4763 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219685 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219709 4763 
reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219718 4763 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219726 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219734 4763 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219743 4763 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219750 4763 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219758 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219766 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219773 4763 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219781 4763 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219789 4763 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219797 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219804 4763 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219813 4763 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219820 4763 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219828 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219837 4763 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219844 4763 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219852 4763 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219859 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219867 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219875 4763 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219882 4763 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219890 4763 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219898 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219906 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219913 4763 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219921 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219929 4763 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219937 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219945 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219953 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219960 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219968 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219976 4763 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219994 4763 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.220002 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.220009 4763 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.220017 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.220025 4763 reconciler_common.go:293] "Volume detached for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.220033 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.220041 4763 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.220050 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.220185 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.220236 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.221214 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.223167 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.224850 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.230670 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.236241 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.248992 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.280756 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.300080 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.320412 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlrxh\" (UniqueName: \"kubernetes.io/projected/82ab6d11-5754-4903-ac36-bb0279dfa1fa-kube-api-access-vlrxh\") pod \"node-ca-ghn8r\" (UID: \"82ab6d11-5754-4903-ac36-bb0279dfa1fa\") " pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.320470 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82ab6d11-5754-4903-ac36-bb0279dfa1fa-host\") pod \"node-ca-ghn8r\" (UID: \"82ab6d11-5754-4903-ac36-bb0279dfa1fa\") " pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.320517 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/69a7de9b-f4a3-408b-8b12-570db6fcd84f-hosts-file\") pod \"node-resolver-qcb97\" (UID: \"69a7de9b-f4a3-408b-8b12-570db6fcd84f\") " pod="openshift-dns/node-resolver-qcb97" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.320536 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzpgh\" (UniqueName: \"kubernetes.io/projected/69a7de9b-f4a3-408b-8b12-570db6fcd84f-kube-api-access-gzpgh\") pod \"node-resolver-qcb97\" (UID: \"69a7de9b-f4a3-408b-8b12-570db6fcd84f\") " pod="openshift-dns/node-resolver-qcb97" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.320559 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/82ab6d11-5754-4903-ac36-bb0279dfa1fa-serviceca\") pod \"node-ca-ghn8r\" (UID: \"82ab6d11-5754-4903-ac36-bb0279dfa1fa\") " pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.320829 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82ab6d11-5754-4903-ac36-bb0279dfa1fa-host\") pod \"node-ca-ghn8r\" (UID: \"82ab6d11-5754-4903-ac36-bb0279dfa1fa\") " pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.321188 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/69a7de9b-f4a3-408b-8b12-570db6fcd84f-hosts-file\") pod \"node-resolver-qcb97\" (UID: \"69a7de9b-f4a3-408b-8b12-570db6fcd84f\") " pod="openshift-dns/node-resolver-qcb97" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.321710 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/82ab6d11-5754-4903-ac36-bb0279dfa1fa-serviceca\") pod \"node-ca-ghn8r\" (UID: \"82ab6d11-5754-4903-ac36-bb0279dfa1fa\") " pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.339985 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.340277 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlrxh\" (UniqueName: \"kubernetes.io/projected/82ab6d11-5754-4903-ac36-bb0279dfa1fa-kube-api-access-vlrxh\") pod \"node-ca-ghn8r\" (UID: \"82ab6d11-5754-4903-ac36-bb0279dfa1fa\") " pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.341366 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzpgh\" (UniqueName: \"kubernetes.io/projected/69a7de9b-f4a3-408b-8b12-570db6fcd84f-kube-api-access-gzpgh\") pod \"node-resolver-qcb97\" (UID: \"69a7de9b-f4a3-408b-8b12-570db6fcd84f\") " pod="openshift-dns/node-resolver-qcb97" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.356008 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.372871 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.384126 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.390771 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.401510 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.423688 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.433741 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.454241 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.455477 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.457932 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-qcb97" Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.481726 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69a7de9b_f4a3_408b_8b12_570db6fcd84f.slice/crio-da140469ad6988e8d25c0e3674b0f930ac0ac71918a29cd12475c0d7eab891d3 WatchSource:0}: Error finding container da140469ad6988e8d25c0e3674b0f930ac0ac71918a29cd12475c0d7eab891d3: Status 404 returned error can't find the container with id da140469ad6988e8d25c0e3674b0f930ac0ac71918a29cd12475c0d7eab891d3 Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.487067 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82ab6d11_5754_4903_ac36_bb0279dfa1fa.slice/crio-1873317d9abd52bf61e30e7a2e558923661088f717636c4e0502de37adebaa17 WatchSource:0}: Error finding container 1873317d9abd52bf61e30e7a2e558923661088f717636c4e0502de37adebaa17: Status 404 returned error can't find the container with id 1873317d9abd52bf61e30e7a2e558923661088f717636c4e0502de37adebaa17 Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.503040 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-npvkf"] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.503806 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-qzkhg"] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.503931 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.504683 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-9wp2x"] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.504883 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.505821 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.508916 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.509099 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.509253 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.509447 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.509594 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.509632 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.509889 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.509931 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.510040 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.513654 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.514425 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.514632 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521589 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-var-lib-cni-multus\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521626 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-etc-kubernetes\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521647 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-run-netns\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521679 4763 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-cnibin\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521716 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2335d04f-10b2-4cf8-aae6-236650539c74-cni-binary-copy\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521738 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-run-k8s-cni-cncf-io\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521757 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-system-cni-dir\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521899 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d1f3628-a7fe-4094-a313-96c0469fcf78-mcd-auth-proxy-config\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521943 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tglt\" (UniqueName: \"kubernetes.io/projected/081252dc-3eaa-4608-8b06-16c377dff2e7-kube-api-access-4tglt\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521978 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-multus-socket-dir-parent\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522002 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-var-lib-cni-bin\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522028 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d1f3628-a7fe-4094-a313-96c0469fcf78-rootfs\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " 
pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522051 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-cnibin\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522075 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2335d04f-10b2-4cf8-aae6-236650539c74-multus-daemon-config\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522098 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-os-release\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522121 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-var-lib-kubelet\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522169 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-multus-cni-dir\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522191 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh4pm\" (UniqueName: \"kubernetes.io/projected/2335d04f-10b2-4cf8-aae6-236650539c74-kube-api-access-zh4pm\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522212 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/081252dc-3eaa-4608-8b06-16c377dff2e7-cni-binary-copy\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522242 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-os-release\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522262 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-hostroot\") pod 
\"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522288 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/081252dc-3eaa-4608-8b06-16c377dff2e7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522325 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-multus-conf-dir\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522365 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522388 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkx2t\" (UniqueName: \"kubernetes.io/projected/9d1f3628-a7fe-4094-a313-96c0469fcf78-kube-api-access-pkx2t\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522422 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d1f3628-a7fe-4094-a313-96c0469fcf78-proxy-tls\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522445 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-system-cni-dir\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522470 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-run-multus-certs\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.523547 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.540828 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.563214 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.574118 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.585111 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.598019 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.607771 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.621407 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623005 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623086 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623111 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-var-lib-kubelet\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623131 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623149 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-multus-cni-dir\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623163 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh4pm\" (UniqueName: \"kubernetes.io/projected/2335d04f-10b2-4cf8-aae6-236650539c74-kube-api-access-zh4pm\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623177 4763 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/081252dc-3eaa-4608-8b06-16c377dff2e7-cni-binary-copy\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623193 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-os-release\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623207 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-hostroot\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623222 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/081252dc-3eaa-4608-8b06-16c377dff2e7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623259 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-multus-conf-dir\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623274 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623291 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623307 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkx2t\" (UniqueName: \"kubernetes.io/projected/9d1f3628-a7fe-4094-a313-96c0469fcf78-kube-api-access-pkx2t\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623323 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d1f3628-a7fe-4094-a313-96c0469fcf78-proxy-tls\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623340 4763 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-system-cni-dir\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623360 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-run-multus-certs\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623376 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-run-netns\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623391 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-var-lib-cni-multus\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623404 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-etc-kubernetes\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623420 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-cnibin\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623433 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2335d04f-10b2-4cf8-aae6-236650539c74-cni-binary-copy\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623448 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-run-k8s-cni-cncf-io\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623450 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-hostroot\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623464 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-system-cni-dir\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " 
pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623502 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-system-cni-dir\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623516 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d1f3628-a7fe-4094-a313-96c0469fcf78-mcd-auth-proxy-config\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623536 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tglt\" (UniqueName: \"kubernetes.io/projected/081252dc-3eaa-4608-8b06-16c377dff2e7-kube-api-access-4tglt\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623542 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-var-lib-cni-multus\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623559 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d1f3628-a7fe-4094-a313-96c0469fcf78-rootfs\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623568 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-etc-kubernetes\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623582 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-multus-socket-dir-parent\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623599 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-var-lib-cni-bin\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623604 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-cnibin\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc 
kubenswrapper[4763]: I0131 14:55:00.623615 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-cnibin\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623633 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2335d04f-10b2-4cf8-aae6-236650539c74-multus-daemon-config\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623626 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-run-netns\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623655 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-system-cni-dir\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623649 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-os-release\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623847 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-os-release\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623924 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-os-release\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623859 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-run-multus-certs\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.623997 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:55:01.62396762 +0000 UTC m=+21.378705913 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624068 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-multus-cni-dir\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.624068 4763 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.624104 4763 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.624135 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:01.624126374 +0000 UTC m=+21.378864667 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.624152 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:01.624144575 +0000 UTC m=+21.378882868 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624157 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-var-lib-kubelet\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624189 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2335d04f-10b2-4cf8-aae6-236650539c74-cni-binary-copy\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624233 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-run-k8s-cni-cncf-io\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624381 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-multus-conf-dir\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624479 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/081252dc-3eaa-4608-8b06-16c377dff2e7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.624580 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.624601 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.624614 4763 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624623 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-var-lib-cni-bin\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624642 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624654 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/081252dc-3eaa-4608-8b06-16c377dff2e7-cni-binary-copy\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.624664 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:01.624644358 +0000 UTC m=+21.379382821 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624670 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-multus-socket-dir-parent\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624707 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d1f3628-a7fe-4094-a313-96c0469fcf78-rootfs\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624713 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-cnibin\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.625189 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d1f3628-a7fe-4094-a313-96c0469fcf78-mcd-auth-proxy-config\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.625685 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2335d04f-10b2-4cf8-aae6-236650539c74-multus-daemon-config\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.628284 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d1f3628-a7fe-4094-a313-96c0469fcf78-proxy-tls\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.639989 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.641061 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tglt\" (UniqueName: \"kubernetes.io/projected/081252dc-3eaa-4608-8b06-16c377dff2e7-kube-api-access-4tglt\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.641467 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkx2t\" (UniqueName: \"kubernetes.io/projected/9d1f3628-a7fe-4094-a313-96c0469fcf78-kube-api-access-pkx2t\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.641670 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh4pm\" (UniqueName: 
\"kubernetes.io/projected/2335d04f-10b2-4cf8-aae6-236650539c74-kube-api-access-zh4pm\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.650782 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.674005 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.686069 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.695907 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.710584 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.720603 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.725345 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.725620 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.725657 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.725675 4763 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.725774 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:01.725746807 +0000 UTC m=+21.480485100 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.727963 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.737660 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.747052 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.755029 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.765076 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.774569 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.785225 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.818665 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-npvkf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.822850 4763 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823138 4763 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823202 4763 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823245 4763 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823297 4763 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823451 4763 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823495 4763 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823578 4763 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823613 4763 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823645 4763 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823671 4763 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823720 4763 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823745 4763 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823751 4763 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823771 4763 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823793 4763 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823812 4763 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823842 4763 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823935 4763 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823981 4763 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823795 4763 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.824033 4763 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.824068 4763 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.835779 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod081252dc_3eaa_4608_8b06_16c377dff2e7.slice/crio-4226d3cba5c5a7097aab57c10efbec3cc6acd0eaf9923a086c6491bfae8420f3 WatchSource:0}: Error finding container 4226d3cba5c5a7097aab57c10efbec3cc6acd0eaf9923a086c6491bfae8420f3: Status 404 returned error can't find the container with id 4226d3cba5c5a7097aab57c10efbec3cc6acd0eaf9923a086c6491bfae8420f3
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.835890 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-qzkhg"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.845327 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x"
Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.850482 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2335d04f_10b2_4cf8_aae6_236650539c74.slice/crio-dc3cca4c513eb1e067e9109deb19ff574fc1649f8e97e78da007d65b15e333aa WatchSource:0}: Error finding container dc3cca4c513eb1e067e9109deb19ff574fc1649f8e97e78da007d65b15e333aa: Status 404 returned error can't find the container with id dc3cca4c513eb1e067e9109deb19ff574fc1649f8e97e78da007d65b15e333aa
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.868285 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dtknf"]
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.872502 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.875794 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.875954 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.876797 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.877132 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.877230 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.877154 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.877153 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.887941 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.901032 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.912220 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.924525 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.933077 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-kubelet\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.933553 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-openvswitch\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.933589 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/047ce610-09fa-482b-8d29-45ad376d12b3-ovn-node-metrics-cert\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.933747 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.933835 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-script-lib\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.933961 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-env-overrides\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934074 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-systemd-units\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934115 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-systemd\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934147 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-node-log\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934184 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-slash\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934232 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-var-lib-openvswitch\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934264 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-ovn-kubernetes\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934296 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-netd\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934331 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxcsm\" (UniqueName: \"kubernetes.io/projected/047ce610-09fa-482b-8d29-45ad376d12b3-kube-api-access-rxcsm\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934366 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-netns\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934399 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-etc-openvswitch\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934431 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-log-socket\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934468 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-config\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934504 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-ovn\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934534 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-bin\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.936941 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.953007 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.961381 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.972484 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.982891 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.990323 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.998789 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.999854 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 22:12:50.380761955 +0000 UTC Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.013910 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.017785 4763 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-31 14:50:00 +0000 UTC, rotation deadline is 2026-12-10 12:48:20.161920154 +0000 UTC Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.017846 4763 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7509h53m19.144077308s for next certificate rotation Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.027412 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.034995 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-systemd-units\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035027 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-systemd\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035042 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-node-log\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035057 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-slash\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035079 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-var-lib-openvswitch\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035095 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-ovn-kubernetes\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035109 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-netd\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035124 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxcsm\" (UniqueName: \"kubernetes.io/projected/047ce610-09fa-482b-8d29-45ad376d12b3-kube-api-access-rxcsm\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035139 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-netns\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035180 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-etc-openvswitch\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035195 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-log-socket\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035210 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-config\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035225 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-ovn\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035239 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-bin\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035233 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-systemd\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035272 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-kubelet\") pod \"ovnkube-node-dtknf\" (UID: 
\"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035295 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-openvswitch\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035331 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/047ce610-09fa-482b-8d29-45ad376d12b3-ovn-node-metrics-cert\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035334 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-systemd-units\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035367 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035398 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-var-lib-openvswitch\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035402 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-script-lib\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035501 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-env-overrides\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035658 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-ovn-kubernetes\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035740 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-netd\") pod \"ovnkube-node-dtknf\" (UID: 
\"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035773 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-node-log\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035808 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-slash\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035843 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-ovn\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035874 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-netns\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035906 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-etc-openvswitch\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035943 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-log-socket\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035959 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-bin\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.036043 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-kubelet\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.036085 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-openvswitch\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.036333 4763 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-env-overrides\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.036386 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.036979 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-config\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.037554 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-script-lib\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.041072 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.041088 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.041077 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.041368 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.041536 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.041660 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.042805 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/047ce610-09fa-482b-8d29-45ad376d12b3-ovn-node-metrics-cert\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.047847 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.048522 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.050129 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.050914 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.052046 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.052616 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.054065 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.054750 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.060104 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.062387 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.064257 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxcsm\" (UniqueName: \"kubernetes.io/projected/047ce610-09fa-482b-8d29-45ad376d12b3-kube-api-access-rxcsm\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.068264 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.069089 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.070111 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.070681 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.071684 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.072348 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.072955 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.073812 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.074458 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.070624 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.075204 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.078012 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.078740 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.081845 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.083480 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.084022 4763 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.088442 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.089838 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.090442 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.091403 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.091967 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.095160 4763 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.095368 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.097067 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.098215 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.098718 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.100436 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.102585 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.103175 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.103961 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.104653 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.105491 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.106541 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.107209 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.111418 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.112233 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.113154 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.113757 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.114654 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.115076 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.115657 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.116664 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.117237 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.117766 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.118824 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.119481 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.120417 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.135459 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.167406 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.170995 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb"}
Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.171186 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb"}
Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.171249 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"93a82828283ccc16045851f8b34e6d568ee811e428f2d570bded649eae30abcd"}
Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.175008 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ghn8r" event={"ID":"82ab6d11-5754-4903-ac36-bb0279dfa1fa","Type":"ContainerStarted","Data":"f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d"}
Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.175470 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ghn8r" event={"ID":"82ab6d11-5754-4903-ac36-bb0279dfa1fa","Type":"ContainerStarted","Data":"1873317d9abd52bf61e30e7a2e558923661088f717636c4e0502de37adebaa17"}
Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.183169 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.183341 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qcb97" event={"ID":"69a7de9b-f4a3-408b-8b12-570db6fcd84f","Type":"ContainerStarted","Data":"874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.183391 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qcb97" event={"ID":"69a7de9b-f4a3-408b-8b12-570db6fcd84f","Type":"ContainerStarted","Data":"da140469ad6988e8d25c0e3674b0f930ac0ac71918a29cd12475c0d7eab891d3"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.186346 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.186395 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.186407 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d5eee5cbb2354072449a4933534633c6ba1ae66562f3fa6e91cdf8d8d36fd740"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.193363 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.193411 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"84f3fd9a473f804a51034f94da7ed19a66a7e9fc3fc0ae0637f9c763fcf7a771"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.194863 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.200310 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzkhg" event={"ID":"2335d04f-10b2-4cf8-aae6-236650539c74","Type":"ContainerStarted","Data":"e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.200367 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzkhg" event={"ID":"2335d04f-10b2-4cf8-aae6-236650539c74","Type":"ContainerStarted","Data":"dc3cca4c513eb1e067e9109deb19ff574fc1649f8e97e78da007d65b15e333aa"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.203926 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.204531 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerStarted","Data":"a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.204562 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerStarted","Data":"4226d3cba5c5a7097aab57c10efbec3cc6acd0eaf9923a086c6491bfae8420f3"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.206056 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0de505e43032dbbbc163fc9a504e070fa5e6f59ee0c408b0059d72965bbce8bd"} Jan 31 14:55:01 crc kubenswrapper[4763]: W0131 14:55:01.207228 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod047ce610_09fa_482b_8d29_45ad376d12b3.slice/crio-be99d205ca17bdd3dd3cbb28a994ab179c5742dad18ab9a3579a8c9f686ccdc3 WatchSource:0}: Error finding container be99d205ca17bdd3dd3cbb28a994ab179c5742dad18ab9a3579a8c9f686ccdc3: Status 404 returned error can't find the container with id 
be99d205ca17bdd3dd3cbb28a994ab179c5742dad18ab9a3579a8c9f686ccdc3 Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.249008 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b
154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\
\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.285734 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.333856 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.365036 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.408951 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.451028 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.486107 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.532414 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.564050 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.604020 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.641977 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.642110 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.642142 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.642176 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:55:03.642145027 +0000 UTC m=+23.396883340 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.642225 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.642250 4763 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.642304 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:03.642289201 +0000 UTC m=+23.397027504 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.642435 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.642455 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.642468 4763 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.642507 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:03.642497676 +0000 UTC m=+23.397236039 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.642551 4763 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.642579 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:03.642569088 +0000 UTC m=+23.397307471 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.653907 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.656743 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.707937 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.737209 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.743216 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.743343 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.743358 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.743370 4763 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.743419 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:03.74340666 +0000 UTC m=+23.498144953 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.763983 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.797151 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.826795 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.877634 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.893546 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-etcd/etcd-crc" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.906197 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.906919 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.917580 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.937081 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.971453 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.997766 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.000026 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:37:28.768700965 +0000 UTC Jan 31 14:55:02 crc 
kubenswrapper[4763]: I0131 14:55:02.017346 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.037715 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.067462 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.096785 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.125530 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.165623 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.177054 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.196656 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.203045 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.209837 4763 generic.go:334] "Generic (PLEG): container finished" podID="081252dc-3eaa-4608-8b06-16c377dff2e7" containerID="a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021" exitCode=0 Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.209905 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerDied","Data":"a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021"} Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.211748 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549" exitCode=0 Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.211822 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549"} Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.211873 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"be99d205ca17bdd3dd3cbb28a994ab179c5742dad18ab9a3579a8c9f686ccdc3"} Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.237529 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 14:55:02 crc kubenswrapper[4763]: E0131 14:55:02.278381 4763 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.287854 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.297440 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.317030 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.358441 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.385258 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.397419 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.417225 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.457475 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.480468 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.507822 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.544514 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.589012 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.627847 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.666491 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.712575 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.750727 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.786098 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.828770 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.864649 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.914629 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.956857 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\
\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.988802 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc 
kubenswrapper[4763]: I0131 14:55:03.001080 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:29:44.216048185 +0000 UTC Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.025179 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.040879 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.040964 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.041040 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.041109 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.040966 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.041197 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.064276 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.082793 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.087924 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.105806 4763 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.124629 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.166560 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.207061 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.215436 4763 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerStarted","Data":"db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138"} Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.217471 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d"} Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.222734 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d"} Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.222767 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b"} Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.222777 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453"} Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.222785 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8"} Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.250393 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z 
is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.279159 4763 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.325561 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-
vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.357136 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.384347 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.428993 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.469423 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.507105 4763 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f6
65b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.551445 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.599370 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 
2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.629651 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.660803 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.660955 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.661007 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.661036 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:55:07.661003812 +0000 UTC m=+27.415742105 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.661087 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.661142 4763 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.661213 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:07.661191917 +0000 UTC m=+27.415930240 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.661246 4763 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.661417 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:07.661384992 +0000 UTC m=+27.416123325 (durationBeforeRetry 4s). 
Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.661417 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:07.661384992 +0000 UTC m=+27.416123325 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.661273 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.661477 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.661504 4763 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.661552 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:07.661538676 +0000 UTC m=+27.416277009 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
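The nestedpendingoperations entries above show the kubelet's per-volume retry throttling: each failed mount or unmount schedules its next attempt after an exponentially growing delay, reported as "durationBeforeRetry" (4s at this point in the log, about 27s after kubelet start). A minimal sketch of that pattern follows; the 500ms initial delay, doubling factor, and 2m2s cap are assumptions based on the commonly cited kubelet defaults, not values taken from this log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed defaults; the log above only exhibits the 4s step.
        delay := 500 * time.Millisecond
        maxDelay := 2*time.Minute + 2*time.Second
        for attempt := 1; attempt <= 10; attempt++ {
            fmt.Printf("attempt %d failed; durationBeforeRetry %v\n", attempt, delay)
            delay *= 2 // double the wait after every consecutive failure
            if delay > maxDelay {
                delay = maxDelay // never wait longer than the cap
            }
        }
    }

Under those assumed constants, a 4s delay corresponds to the fourth consecutive failure of the same operation, consistent with the retries accumulating since kubelet startup.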
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.709019 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.761151 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.762069 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.762340 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.762397 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.762421 4763 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.762503 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:07.762475291 +0000 UTC m=+27.517213614 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.792467 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.833435 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.868286 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.911527 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.952378 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.993790 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.002139 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:03:09.108046711 +0000 UTC Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.030617 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.073822 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.119296 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.148785 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.229944 4763 generic.go:334] "Generic (PLEG): container finished" podID="081252dc-3eaa-4608-8b06-16c377dff2e7" containerID="db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138" exitCode=0 Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.230023 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerDied","Data":"db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138"} Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.235985 4763 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294"} Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.236028 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e"} Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.265843 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.293083 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.313253 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.332376 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.344138 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.387379 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.426647 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.462711 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.506460 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.551667 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z 
is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.595123 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.627121 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.668814 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.706311 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.746558 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.003667 4763 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 10:15:59.249176626 +0000 UTC Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.041000 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.041151 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:05 crc kubenswrapper[4763]: E0131 14:55:05.041323 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.041931 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:05 crc kubenswrapper[4763]: E0131 14:55:05.042104 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:05 crc kubenswrapper[4763]: E0131 14:55:05.042207 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
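[Editor's annotation] Every "Failed to update status for pod" entry above is rejected by the same admission webhook: pod.network-node-identity.openshift.io at https://127.0.0.1:9743/pod, whose serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2026-01-31 — the typical symptom of resuming a CRC-style snapshot long after its certificates lapsed (note the kubelet-serving line above also shows a rotation deadline already in the past, so the kubelet will attempt rotation immediately). A minimal Go sketch of the same validity check, for probing from the node; the address comes from the log line, and this is an illustrative probe, not kubelet code:

```go
// probe_webhook_cert.go: connect to the webhook endpoint named in the log and
// report whether its serving certificate would pass the x509 validity check.
// Illustrative only; run on the affected node.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // webhook endpoint from the log

	// Skip chain verification so we can still inspect an already-expired cert.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Printf("NotBefore: %s\nNotAfter:  %s\nNow:       %s\n",
		cert.NotBefore.UTC(), cert.NotAfter.UTC(), now.UTC())

	// The comparison behind "x509: certificate has expired or is not yet
	// valid": the current time must fall inside the validity window.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Println("certificate is outside its validity window")
	} else {
		fmt.Println("certificate is currently valid")
	}
}
```

On CRC this is usually transient: the patch failures repeat until the cluster's cert-rotation machinery reissues the webhook's serving certificate, after which the queued status updates go through.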
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.243286 4763 generic.go:334] "Generic (PLEG): container finished" podID="081252dc-3eaa-4608-8b06-16c377dff2e7" containerID="99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226" exitCode=0 Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.243337 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerDied","Data":"99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226"} Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.278352 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/
etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.294502 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.309362 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.323689 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.336665 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.364268 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.382043 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.402581 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.417410 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.428323 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.448567 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.463826 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.479170 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.490116 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.505007 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.004443 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 11:16:07.452523393 +0000 UTC Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.253044 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.257491 4763 generic.go:334] "Generic (PLEG): container finished" podID="081252dc-3eaa-4608-8b06-16c377dff2e7" containerID="c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830" exitCode=0 Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.257550 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerDied","Data":"c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.284672 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.301588 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.315476 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.333387 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.360270 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.365137 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.365197 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.365217 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.365391 4763 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.370654 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.374403 4763 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.374613 4763 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.375494 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.375541 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.375553 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.375569 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.375581 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.392443 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: E0131 14:55:06.397653 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 
2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.401348 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.401377 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.401393 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.401413 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.401427 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.412837 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: E0131 14:55:06.413871 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... verbatim duplicate of the image list, nodeInfo, and runtimeHandlers payload in the first node status entry above; omitted ... ]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 
2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.417835 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.417878 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.417890 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.417905 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.417918 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.426188 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: E0131 14:55:06.432426 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... verbatim duplicate of the image list, nodeInfo, and runtimeHandlers payload in the first node status entry above; omitted ... ]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 
2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.436242 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.436284 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.436298 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.436320 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.436335 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.438760 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:
00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: E0131 14:55:06.451196 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... verbatim duplicate of the image list, nodeInfo, and runtimeHandlers payload in the first node status entry above; omitted ... ]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 
2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.452412 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.455402 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.455437 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.455450 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.455470 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.455481 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.467753 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: E0131 14:55:06.469960 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: E0131 14:55:06.470178 4763 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.471796 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.471832 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.471846 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.471865 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.471882 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.482332 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.495220 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.510193 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.521387 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.574380 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.574422 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.574433 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.574451 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.574463 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.678774 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.678897 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.678973 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.679013 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.679039 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.781392 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.781420 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.781428 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.781464 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.781474 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.883977 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.884021 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.884035 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.884058 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.884072 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.986780 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.986814 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.986826 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.986841 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.986856 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.005133 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 18:26:33.7207247 +0000 UTC Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.041524 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.041609 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.041718 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.041754 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.041854 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.042019 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.089749 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.089813 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.089833 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.089863 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.089884 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:07Z","lastTransitionTime":"2026-01-31T14:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.192584 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.192615 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.192624 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.192638 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.192647 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:07Z","lastTransitionTime":"2026-01-31T14:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.266186 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerStarted","Data":"bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073"} Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.293571 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd9
0d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.294400 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.294441 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.294450 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.294464 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.294474 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:07Z","lastTransitionTime":"2026-01-31T14:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.309939 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.323029 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.337443 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.347769 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.359664 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.377980 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.396976 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.397017 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.397031 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.397051 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.397066 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:07Z","lastTransitionTime":"2026-01-31T14:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.398350 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.408806 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.418270 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.429028 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.441740 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.459788 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.477380 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.490556 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.499466 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.499516 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.499533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.499554 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.499569 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:07Z","lastTransitionTime":"2026-01-31T14:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.602580 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.602637 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.602682 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.602745 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.602768 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:07Z","lastTransitionTime":"2026-01-31T14:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.704254 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.704449 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:55:15.704395413 +0000 UTC m=+35.459133716 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.704576 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.704616 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.704659 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.704735 4763 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.704810 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.704828 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.704832 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:15.704809004 +0000 UTC m=+35.459547337 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.704842 4763 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.704884 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:15.704873746 +0000 UTC m=+35.459612049 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.704925 4763 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.705061 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:15.70502729 +0000 UTC m=+35.459765633 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.707507 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.707543 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.707562 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.707586 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.707602 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:07Z","lastTransitionTime":"2026-01-31T14:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.805418 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.805751 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.805815 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.805842 4763 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.805945 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:15.805913454 +0000 UTC m=+35.560651807 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.811668 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.811709 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.811721 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.811739 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.811752 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:07Z","lastTransitionTime":"2026-01-31T14:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.913720 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.913766 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.913780 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.913799 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.913815 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:07Z","lastTransitionTime":"2026-01-31T14:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.005774 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:37:46.451819706 +0000 UTC
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.016349 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.016428 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.016460 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.016492 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.016517 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.119074 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.119138 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.119159 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.119185 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.119201 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.221911 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.221991 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.222016 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.222052 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.222077 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.276336 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a"}
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.276866 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.284824 4763 generic.go:334] "Generic (PLEG): container finished" podID="081252dc-3eaa-4608-8b06-16c377dff2e7" containerID="bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073" exitCode=0
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.284872 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerDied","Data":"bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073"}
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.291038 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.305241 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.324829 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.324866 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.324876 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.324891 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.324900 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.326495 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.342083 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.343654 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.361764 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.372988 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.384683 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.407428 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.420320 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.427569 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.427595 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.427603 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.427617 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.427626 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.435293 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.449594 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.460603 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.476143 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.494180 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.515221 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.530451 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.530493 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.530504 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.530522 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.530536 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.533351 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685
cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.544929 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.565506 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.582417 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.604872 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.629791 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.633204 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.633238 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.633249 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:08 crc 
kubenswrapper[4763]: I0131 14:55:08.633266 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.633277 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.644744 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.657662 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.668519 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.678965 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.692141 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.706997 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.717719 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.727578 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.735052 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.735080 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.735092 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.735106 4763 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.735115 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.740169 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.838272 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.838307 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.838315 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.838329 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.838339 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.940973 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.941031 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.941048 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.941071 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.941087 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.006752 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:08:53.710945004 +0000 UTC
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.041558 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.041561 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:55:09 crc kubenswrapper[4763]: E0131 14:55:09.041756 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 14:55:09 crc kubenswrapper[4763]: E0131 14:55:09.041873 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.041555 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:55:09 crc kubenswrapper[4763]: E0131 14:55:09.042149 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.043896 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.043948 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.043979 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.044003 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.044021 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.146673 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.146782 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.146807 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.146837 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.146860 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.249740 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.249816 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.249839 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.249867 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.249889 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.295571 4763 generic.go:334] "Generic (PLEG): container finished" podID="081252dc-3eaa-4608-8b06-16c377dff2e7" containerID="95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5" exitCode=0
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.295720 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerDied","Data":"95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5"}
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.295868 4763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.296750 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.323161 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.345872 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.354028 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.354078 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.354095 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.354117 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.354134 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.359690 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.378270 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.391202 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.406996 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.425153 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.441322 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.454499 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.456020 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.456118 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.456222 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.456243 4763 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.456255 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.473503 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.492392 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c
1fea81c378506c5f53bf845a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.515886 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e
04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.534742 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.550938 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.559172 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.559205 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.559215 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.559230 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.559240 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.559995 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.569638 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.577871 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.590053 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd8
8956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.614992 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e
04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.626674 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.642507 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.655640 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.662164 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.662193 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.662201 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.662214 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.662226 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.664458 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.683871 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c
1fea81c378506c5f53bf845a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.700478 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.714217 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.727239 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.740420 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.750718 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.761931 4763 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.764427 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.764455 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.764463 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.764476 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.764485 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.776780 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd9
0d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.792349 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-
var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.866426 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.866451 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.866461 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.866473 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.866483 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.969554 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.969598 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.969613 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.969635 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.969653 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.007887 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 07:15:40.281979072 +0000 UTC Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.072065 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.072097 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.072106 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.072122 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.072133 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:10Z","lastTransitionTime":"2026-01-31T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.177531 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.177565 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.177574 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.177587 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.177595 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:10Z","lastTransitionTime":"2026-01-31T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.280488 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.280537 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.280555 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.280578 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.280593 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:10Z","lastTransitionTime":"2026-01-31T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.303531 4763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.304684 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerStarted","Data":"2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4"} Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.328364 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.342461 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.358553 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.383963 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.384009 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.384026 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.384052 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.384070 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:10Z","lastTransitionTime":"2026-01-31T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.415893 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.428547 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.442914 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.457017 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.468787 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 
14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.485269 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.486181 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 
14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.486210 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.486224 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.486244 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.486258 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:10Z","lastTransitionTime":"2026-01-31T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.499774 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.513060 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.532753 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"na
me\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.551897 4763 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921
636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.566030 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.577583 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.587960 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.587997 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.588008 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.588027 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.588039 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:10Z","lastTransitionTime":"2026-01-31T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.690836 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.690885 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.690896 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.690913 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.690924 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:10Z","lastTransitionTime":"2026-01-31T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.794263 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.794321 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.794338 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.794365 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.794382 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:10Z","lastTransitionTime":"2026-01-31T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.897836 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.897914 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.897934 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.897965 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.897988 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:10Z","lastTransitionTime":"2026-01-31T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.000206 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.000251 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.000262 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.000282 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.000294 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.008825 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:11:18.518001331 +0000 UTC Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.041114 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.041175 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:11 crc kubenswrapper[4763]: E0131 14:55:11.041381 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.041404 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:11 crc kubenswrapper[4763]: E0131 14:55:11.041530 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:11 crc kubenswrapper[4763]: E0131 14:55:11.041652 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.065782 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.077102 4763 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.085582 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.103414 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.103508 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.103533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.103559 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.103578 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.107284 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.125328 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.142465 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.162514 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.177414 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.189088 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.204382 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.206495 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.206524 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc 
kubenswrapper[4763]: I0131 14:55:11.206535 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.206551 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.206562 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.217261 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.238222 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c
1fea81c378506c5f53bf845a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.271143 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e
04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.289024 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.310981 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.312802 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.312838 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.312847 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.312862 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.312873 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.315437 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/0.log" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.321485 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a" exitCode=1 Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.321558 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.322617 4763 scope.go:117] "RemoveContainer" containerID="5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.325286 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.358344 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.378138 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.396972 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.413415 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.415481 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.415564 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.415581 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.415603 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.415619 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.431243 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.468224 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:11Z\\\",\\\"message\\\":\\\"11.085355 6004 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:55:11.085548 6004 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:55:11.086307 6004 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:55:11.086336 6004 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:55:11.086359 6004 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:11.086367 6004 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:11.086392 6004 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:11.086437 6004 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:11.086444 6004 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:11.086485 6004 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:11.086536 6004 factory.go:656] Stopping watch factory\\\\nI0131 14:55:11.086560 6004 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:11.086576 6004 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:11.086588 6004 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:11.086600 6004 
handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.488625 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.512152 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.518805 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.518866 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.518883 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.519519 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.519612 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.536773 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.554355 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.572605 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.592500 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.610001 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.621037 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.621219 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.621272 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.621289 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.621315 4763 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.621332 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.641477 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bi
nary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.723918 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.723962 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.723980 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.724001 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.724017 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.826301 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.826337 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.826348 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.826362 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.826372 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.929928 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.929975 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.929994 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.930021 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.930043 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.009757 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 22:45:32.168763744 +0000 UTC Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.032594 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.032617 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.032624 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.032637 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.032646 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.135190 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.135241 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.135259 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.135282 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.135299 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.238015 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.238085 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.238109 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.238141 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.238165 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.277456 4763 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.326928 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/0.log"
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.329866 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518"}
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.330021 4763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.340853 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.340907 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.340929 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.340957 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.340982 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.346168 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.362088 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.379724 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.412536 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:11Z\\\",\\\"message\\\":\\\"11.085355 6004 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:55:11.085548 6004 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:55:11.086307 6004 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:55:11.086336 6004 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:55:11.086359 6004 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:11.086367 6004 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:11.086392 6004 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:11.086437 6004 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:11.086444 6004 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:11.086485 6004 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:11.086536 6004 factory.go:656] Stopping watch factory\\\\nI0131 14:55:11.086560 6004 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:11.086576 6004 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:11.086588 6004 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:11.086600 6004 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.443095 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.443154 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.443171 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.443197 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.443218 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.447009 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.466062 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.487815 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.508618 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.532270 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.546135 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.546213 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.546244 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.546276 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.546300 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.556284 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.576658 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.599892 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.619471 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"moun
tPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.635216 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.1
1\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.649059 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.649119 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.649136 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.649159 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.649176 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.657814 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.752730 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.752813 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:12 crc 
kubenswrapper[4763]: I0131 14:55:12.752831 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.752857 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.752875 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.855599 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.855654 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.855672 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.855736 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.855762 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.958748 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.958838 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.958857 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.958883 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.958901 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.010322 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 20:04:19.638290402 +0000 UTC Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.045277 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:13 crc kubenswrapper[4763]: E0131 14:55:13.045440 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.045968 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:13 crc kubenswrapper[4763]: E0131 14:55:13.046088 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.046160 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:13 crc kubenswrapper[4763]: E0131 14:55:13.046235 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.334383 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/1.log" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.334930 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/0.log" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.336978 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518" exitCode=1 Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.337025 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518"} Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.337079 4763 scope.go:117] "RemoveContainer" containerID="5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.337948 4763 scope.go:117] "RemoveContainer" containerID="307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518" Jan 31 14:55:13 crc kubenswrapper[4763]: E0131 14:55:13.338116 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.545325 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.545380 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.545397 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.545421 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.545438 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:13Z","lastTransitionTime":"2026-01-31T14:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.558178 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.573780 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sh
a256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 
14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.589055 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.611029 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.644289 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e
04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.648579 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.648649 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.648668 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.648719 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.648738 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:13Z","lastTransitionTime":"2026-01-31T14:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.663747 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.681582 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.704560 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.723494 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.751502 4763 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.751567 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.751586 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.751613 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.751633 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:13Z","lastTransitionTime":"2026-01-31T14:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.756881 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e3
1746a8e4172034a8b2177518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:11Z\\\",\\\"message\\\":\\\"11.085355 6004 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:55:11.085548 6004 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:55:11.086307 6004 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:55:11.086336 6004 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:55:11.086359 6004 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:11.086367 6004 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:11.086392 6004 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:11.086437 6004 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:11.086444 6004 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:11.086485 6004 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:11.086536 6004 factory.go:656] Stopping watch factory\\\\nI0131 14:55:11.086560 6004 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:11.086576 6004 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:11.086588 6004 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:11.086600 6004 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:12Z\\\",\\\"message\\\":\\\"ved *v1.Node event handler 2\\\\nI0131 14:55:12.246342 6167 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:12.246366 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:12.246614 6167 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246653 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:12.246751 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:12.246781 6167 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:12.246844 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246855 6167 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:12.246869 6167 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:12.246911 6167 factory.go:656] Stopping watch factory\\\\nI0131 14:55:12.246936 6167 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:12.246949 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:12.246966 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:12.246968 6167 metrics.go:553] Stopping metrics 
server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 14:55:12.246988 6167 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0131 14:55:12.247078 6167 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\
"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.779436 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.798628 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.818554 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.838804 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.840895 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv"] Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.841448 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.845355 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.845799 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.857133 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.857193 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.857210 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.857235 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.857254 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:13Z","lastTransitionTime":"2026-01-31T14:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.867869 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.888862 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3b
efb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.910286 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubele
t\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.928627 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.946004 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.960557 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.960610 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.960627 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.960649 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.960666 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:13Z","lastTransitionTime":"2026-01-31T14:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.972460 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/07280471-d907-4c1f-a38f-9337ecb04b43-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.972516 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/07280471-d907-4c1f-a38f-9337ecb04b43-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.972572 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/07280471-d907-4c1f-a38f-9337ecb04b43-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.972639 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzsm9\" (UniqueName: \"kubernetes.io/projected/07280471-d907-4c1f-a38f-9337ecb04b43-kube-api-access-pzsm9\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.973018 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
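The repeating NodeNotReady condition above comes from the kubelet's network-readiness check: the CRI network plugin reports NetworkReady=false until at least one CNI configuration file parses from its conf directory, here /etc/kubernetes/cni/net.d/. A rough sketch of that directory scan follows, assuming the conventional CNI extensions (.conf, .conflist, .json); this is an illustration, not libcni's actual implementation.

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log message
	var files []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, _ := filepath.Glob(filepath.Join(confDir, pat)) // error only on a bad pattern
		files = append(files, m...)
	}
	if len(files) == 0 {
		fmt.Printf("no CNI configuration file in %s/. Has your network provider started?\n", confDir)
		return
	}
	fmt.Println("found CNI config:", files)
}

The condition clears on its own once ovnkube-node (restarting in this log) writes its config into that directory.
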
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.990789 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.009239 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.011270 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 22:37:59.092056569 +0000 UTC Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.028425 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.054174 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:11Z\\\",\\\"message\\\":\\\"11.085355 6004 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:55:11.085548 6004 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:55:11.086307 6004 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:55:11.086336 6004 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:55:11.086359 6004 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:11.086367 6004 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:11.086392 6004 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:11.086437 6004 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:11.086444 6004 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:11.086485 6004 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:11.086536 6004 factory.go:656] Stopping watch factory\\\\nI0131 14:55:11.086560 6004 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:11.086576 6004 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:11.086588 6004 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:11.086600 6004 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:12Z\\\",\\\"message\\\":\\\"ved *v1.Node event handler 2\\\\nI0131 14:55:12.246342 6167 handler.go:208] Removed 
*v1.Node event handler 7\\\\nI0131 14:55:12.246366 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:12.246614 6167 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246653 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:12.246751 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:12.246781 6167 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:12.246844 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246855 6167 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:12.246869 6167 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:12.246911 6167 factory.go:656] Stopping watch factory\\\\nI0131 14:55:12.246936 6167 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:12.246949 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:12.246966 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:12.246968 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 14:55:12.246988 6167 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0131 14:55:12.247078 6167 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.064329 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.064365 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.064374 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.064389 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.064398 4763 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.074017 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/07280471-d907-4c1f-a38f-9337ecb04b43-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.074117 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzsm9\" (UniqueName: \"kubernetes.io/projected/07280471-d907-4c1f-a38f-9337ecb04b43-kube-api-access-pzsm9\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.074180 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/07280471-d907-4c1f-a38f-9337ecb04b43-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.074216 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/07280471-d907-4c1f-a38f-9337ecb04b43-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.075361 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/07280471-d907-4c1f-a38f-9337ecb04b43-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.075764 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/07280471-d907-4c1f-a38f-9337ecb04b43-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.080975 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/07280471-d907-4c1f-a38f-9337ecb04b43-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.094396 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da
9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc4
91b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.097937 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzsm9\" (UniqueName: \"kubernetes.io/projected/07280471-d907-4c1f-a38f-9337ecb04b43-kube-api-access-pzsm9\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.121645 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
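The reconciler_common and operation_generator lines above trace the kubelet volume manager's two-phase flow for the ovnkube-control-plane pod: each volume is first verified as attached (VerifyControllerAttachedVolume), then mounted (MountVolume.SetUp succeeded). The loop below is an illustrative stand-in for that desired-state reconciliation, not the kubelet's real interfaces.

package main

import "fmt"

type volume struct{ name, pod string }

func verifyAttached(v volume) error { return nil } // stand-in for the attach check
func setUp(v volume) error          { return nil } // stand-in for the mount

func main() {
	// Volume and pod names taken from the records above.
	desired := []volume{
		{"ovnkube-config", "ovnkube-control-plane-749d76644c-8lmbv"},
		{"env-overrides", "ovnkube-control-plane-749d76644c-8lmbv"},
		{"ovn-control-plane-metrics-cert", "ovnkube-control-plane-749d76644c-8lmbv"},
	}
	for _, v := range desired {
		if err := verifyAttached(v); err != nil {
			fmt.Println("attach not verified:", v.name, err)
			continue
		}
		if err := setUp(v); err != nil {
			fmt.Println("MountVolume.SetUp failed:", v.name, err)
			continue
		}
		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.name, v.pod)
	}
}
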
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.138084 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.156722 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.167373 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.167530 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.167551 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.167606 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.167681 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.169726 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.174281 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: W0131 14:55:14.190971 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07280471_d907_4c1f_a38f_9337ecb04b43.slice/crio-dc781ab56e38f793d59ad90cfa5d42a04a314aa09889fcbf1ee9e5dda77b2637 WatchSource:0}: Error finding container dc781ab56e38f793d59ad90cfa5d42a04a314aa09889fcbf1ee9e5dda77b2637: Status 404 returned error can't find the container with id dc781ab56e38f793d59ad90cfa5d42a04a314aa09889fcbf1ee9e5dda77b2637 Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.192478 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.209416 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.270774 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.270814 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.270827 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.270844 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.270856 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.341942 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" event={"ID":"07280471-d907-4c1f-a38f-9337ecb04b43","Type":"ContainerStarted","Data":"dc781ab56e38f793d59ad90cfa5d42a04a314aa09889fcbf1ee9e5dda77b2637"} Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.343226 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/1.log" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.346540 4763 scope.go:117] "RemoveContainer" containerID="307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518" Jan 31 14:55:14 crc kubenswrapper[4763]: E0131 14:55:14.346713 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.358759 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.368270 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\
\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.372927 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.373069 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.373142 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.373225 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.373286 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.392568 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e3
1746a8e4172034a8b2177518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:12Z\\\",\\\"message\\\":\\\"ved *v1.Node event handler 2\\\\nI0131 14:55:12.246342 6167 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:12.246366 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:12.246614 6167 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246653 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:12.246751 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:12.246781 6167 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:12.246844 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246855 6167 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:12.246869 6167 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:12.246911 6167 factory.go:656] Stopping watch factory\\\\nI0131 14:55:12.246936 6167 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:12.246949 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:12.246966 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:12.246968 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 14:55:12.246988 6167 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0131 14:55:12.247078 6167 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.451246 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7
a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.473507 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.475938 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.475981 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.475993 4763 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.476011 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.476026 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.486910 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.500791 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.518002 4763 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8
d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.538138 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.552241 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.571643 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.578807 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.578855 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.578865 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.578883 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.578896 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.585581 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.605189 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.619328 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.635625 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.649423 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.681238 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:14 crc 
kubenswrapper[4763]: I0131 14:55:14.681280 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.681289 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.681305 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.681315 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.783963 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.784030 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.784045 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.784065 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.784080 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.886979 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.887055 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.887075 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.887551 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.887839 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.951913 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-26pm5"] Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.952647 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:14 crc kubenswrapper[4763]: E0131 14:55:14.952783 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.975156 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.988156 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.989752 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.989789 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.989799 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.989815 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.989826 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.998782 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.008815 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.011412 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 00:37:34.044049437 +0000 UTC Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.020919 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.030898 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.041850 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.041850 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.041989 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.042041 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.041860 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.042113 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.053031 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1
e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.064198 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.074550 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.083950 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.084014 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlvp4\" (UniqueName: \"kubernetes.io/projected/84302428-88e1-47ba-84cc-7d12472f9aa2-kube-api-access-mlvp4\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.085076 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.091546 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.091620 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.091631 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.091645 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.091653 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:15Z","lastTransitionTime":"2026-01-31T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.097360 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.121461 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:12Z\\\",\\\"message\\\":\\\"ved *v1.Node event handler 2\\\\nI0131 14:55:12.246342 6167 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:12.246366 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:12.246614 6167 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246653 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:12.246751 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:12.246781 6167 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:12.246844 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246855 6167 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:12.246869 6167 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:12.246911 6167 factory.go:656] Stopping watch factory\\\\nI0131 14:55:12.246936 6167 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:12.246949 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:12.246966 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:12.246968 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 14:55:12.246988 6167 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0131 14:55:12.247078 6167 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.133872 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.148009 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.159901 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.172814 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.184585 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlvp4\" (UniqueName: \"kubernetes.io/projected/84302428-88e1-47ba-84cc-7d12472f9aa2-kube-api-access-mlvp4\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.184633 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.184788 4763 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.184837 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs podName:84302428-88e1-47ba-84cc-7d12472f9aa2 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:15.684821036 +0000 UTC m=+35.439559329 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs") pod "network-metrics-daemon-26pm5" (UID: "84302428-88e1-47ba-84cc-7d12472f9aa2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.185294 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.194862 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.194912 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.194960 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.194983 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.195000 4763 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:15Z","lastTransitionTime":"2026-01-31T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.207891 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlvp4\" (UniqueName: \"kubernetes.io/projected/84302428-88e1-47ba-84cc-7d12472f9aa2-kube-api-access-mlvp4\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.297712 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.297756 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.297767 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.297784 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.297796 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:15Z","lastTransitionTime":"2026-01-31T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.351128 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" event={"ID":"07280471-d907-4c1f-a38f-9337ecb04b43","Type":"ContainerStarted","Data":"2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069"} Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.351180 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" event={"ID":"07280471-d907-4c1f-a38f-9337ecb04b43","Type":"ContainerStarted","Data":"b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e"} Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.379116 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},
{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.402131 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.402218 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.402242 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.402270 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.402291 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:15Z","lastTransitionTime":"2026-01-31T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.407597 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.427918 4763 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.445676 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.459625 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.493796 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:12Z\\\",\\\"message\\\":\\\"ved *v1.Node event handler 2\\\\nI0131 14:55:12.246342 6167 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:12.246366 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:12.246614 6167 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246653 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:12.246751 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:12.246781 6167 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:12.246844 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246855 6167 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:12.246869 6167 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:12.246911 6167 factory.go:656] Stopping watch factory\\\\nI0131 14:55:12.246936 6167 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:12.246949 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:12.246966 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:12.246968 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 14:55:12.246988 6167 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0131 14:55:12.247078 6167 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.510770 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.510971 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.511174 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.511188 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.511207 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.511220 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:15Z","lastTransitionTime":"2026-01-31T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.527229 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.541966 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.555601 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.569502 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.585333 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.602537 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.614633 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.614667 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.614676 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.614716 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.614730 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:15Z","lastTransitionTime":"2026-01-31T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.622123 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.641241 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.665885 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.683880 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 
14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.690295 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.690424 4763 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.690478 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs podName:84302428-88e1-47ba-84cc-7d12472f9aa2 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:16.690463347 +0000 UTC m=+36.445201650 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs") pod "network-metrics-daemon-26pm5" (UID: "84302428-88e1-47ba-84cc-7d12472f9aa2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.717505 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.717559 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.717575 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.717596 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.717613 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:15Z","lastTransitionTime":"2026-01-31T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.791437 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.791608 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:55:31.791575527 +0000 UTC m=+51.546313860 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.791802 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.792023 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.792083 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.792108 4763 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.792124 4763 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.792031 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.792180 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:31.792157783 +0000 UTC m=+51.546896126 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.792231 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-31 14:55:31.792214344 +0000 UTC m=+51.546952777 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.792271 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.792405 4763 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.792477 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:31.79245309 +0000 UTC m=+51.547191423 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.820746 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.820881 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.820901 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.820927 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.820944 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:15Z","lastTransitionTime":"2026-01-31T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.893230 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.893535 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.893604 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.893628 4763 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.893761 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:31.893735154 +0000 UTC m=+51.648473477 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.923966 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.924056 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.924075 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.924557 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.924620 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:15Z","lastTransitionTime":"2026-01-31T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.011891 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:40:24.276583605 +0000 UTC
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.027447 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.027504 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.027524 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.027547 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.027565 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.130161 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.130228 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.130246 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.130271 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.130291 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.232989 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.233041 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.233052 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.233071 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.233082 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.336409 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.336457 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.336469 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.336486 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.336500 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.439418 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.439510 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.439534 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.439568 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.439592 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.543174 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.543247 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.543264 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.543292 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.543312 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.646006 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.646126 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.646145 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.646168 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.646186 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.703290 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5"
Jan 31 14:55:16 crc kubenswrapper[4763]: E0131 14:55:16.703461 4763 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 31 14:55:16 crc kubenswrapper[4763]: E0131 14:55:16.703577 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs podName:84302428-88e1-47ba-84cc-7d12472f9aa2 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:18.703554688 +0000 UTC m=+38.458292991 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs") pod "network-metrics-daemon-26pm5" (UID: "84302428-88e1-47ba-84cc-7d12472f9aa2") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.733658 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.733721 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.733768 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.733791 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.733801 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:16 crc kubenswrapper[4763]: E0131 14:55:16.753849 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.758822 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.758888 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.758905 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.758933 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.758950 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:16 crc kubenswrapper[4763]: E0131 14:55:16.779163 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.784235 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.784284 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.784301 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.784325 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.784342 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:16 crc kubenswrapper[4763]: E0131 14:55:16.804194 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.809912 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.809975 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.810045 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.810068 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.810085 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:16 crc kubenswrapper[4763]: E0131 14:55:16.833274 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.838289 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.838338 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.838356 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.838378 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.838395 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:16 crc kubenswrapper[4763]: E0131 14:55:16.854588 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:16 crc kubenswrapper[4763]: E0131 14:55:16.854826 4763 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.857345 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.857460 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.857480 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.858068 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.858131 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.961895 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.962014 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.962039 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.962070 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.962097 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.013022 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 20:35:54.43267019 +0000 UTC Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.041778 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.041917 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.042000 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.042109 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:17 crc kubenswrapper[4763]: E0131 14:55:17.042150 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:17 crc kubenswrapper[4763]: E0131 14:55:17.042283 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:17 crc kubenswrapper[4763]: E0131 14:55:17.042453 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:17 crc kubenswrapper[4763]: E0131 14:55:17.042529 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.065496 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.065600 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.065624 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.065659 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.065731 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.169884 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.169946 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.169961 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.169991 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.170010 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.274297 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.274557 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.274602 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.274640 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.274665 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.378272 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.378342 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.378362 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.378386 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.378403 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.481904 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.482031 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.482054 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.482120 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.482138 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.585295 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.585413 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.585435 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.585463 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.585481 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.688759 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.688807 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.688822 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.688846 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.688862 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.792429 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.792509 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.792529 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.792552 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.792570 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.895214 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.895277 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.895298 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.895322 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.895339 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.997982 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.998034 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.998055 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.998075 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.998088 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.013435 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 01:41:46.202879749 +0000 UTC Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.100752 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.100882 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.100902 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.100930 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.100949 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:18Z","lastTransitionTime":"2026-01-31T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.204198 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.204285 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.204305 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.204330 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.204347 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:18Z","lastTransitionTime":"2026-01-31T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.307299 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.307357 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.307376 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.307400 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.307416 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:18Z","lastTransitionTime":"2026-01-31T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.411204 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.411326 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.411379 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.411405 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.411456 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:18Z","lastTransitionTime":"2026-01-31T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.514192 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.514289 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.514312 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.514384 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.514405 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:18Z","lastTransitionTime":"2026-01-31T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.618171 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.618255 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.618274 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.618299 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.618323 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:18Z","lastTransitionTime":"2026-01-31T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.722043 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.722106 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.722125 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.722150 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.722171 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:18Z","lastTransitionTime":"2026-01-31T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.725886 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:18 crc kubenswrapper[4763]: E0131 14:55:18.726080 4763 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:18 crc kubenswrapper[4763]: E0131 14:55:18.726183 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs podName:84302428-88e1-47ba-84cc-7d12472f9aa2 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:22.726157182 +0000 UTC m=+42.480895515 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs") pod "network-metrics-daemon-26pm5" (UID: "84302428-88e1-47ba-84cc-7d12472f9aa2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.824835 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.824902 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.824919 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.824943 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.824961 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:18Z","lastTransitionTime":"2026-01-31T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.928902 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.928950 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.928962 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.928979 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.928992 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:18Z","lastTransitionTime":"2026-01-31T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.013784 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:12:21.830993003 +0000 UTC Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.031062 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.031118 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.031136 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.031161 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.031181 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.041725 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.041729 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.041890 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:19 crc kubenswrapper[4763]: E0131 14:55:19.041990 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.042096 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:19 crc kubenswrapper[4763]: E0131 14:55:19.042239 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:19 crc kubenswrapper[4763]: E0131 14:55:19.042332 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:19 crc kubenswrapper[4763]: E0131 14:55:19.042393 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.133843 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.133933 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.133950 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.134009 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.134028 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.237471 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.237538 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.237558 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.237582 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.237599 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.340800 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.340848 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.340865 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.340888 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.340904 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.444401 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.444458 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.444476 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.444500 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.444520 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.547504 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.547589 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.547612 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.547643 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.547667 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.651047 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.651115 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.651133 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.651158 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.651177 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.754901 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.754975 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.754998 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.755024 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.755041 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.857575 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.857646 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.857665 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.857723 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.857749 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.960986 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.961038 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.961055 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.961077 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.961093 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.014647 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 06:24:05.754594385 +0000 UTC Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.064445 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.064587 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.064617 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.064646 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.064669 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.167816 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.167872 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.167889 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.167911 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.167927 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.270840 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.270890 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.270906 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.270927 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.270945 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.373122 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.373189 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.373212 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.373242 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.373264 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.476446 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.476497 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.476513 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.476537 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.476553 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.579380 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.579462 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.579487 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.579518 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.579542 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.683171 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.683245 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.683266 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.683293 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.683313 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.786358 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.786420 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.786436 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.786460 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.786480 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.889959 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.890048 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.890072 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.890105 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.890130 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.993028 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.993098 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.993118 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.993144 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.993164 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.015865 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 10:16:01.214346685 +0000 UTC Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.041430 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.041528 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.041468 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.041475 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:21 crc kubenswrapper[4763]: E0131 14:55:21.041722 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:21 crc kubenswrapper[4763]: E0131 14:55:21.041808 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:21 crc kubenswrapper[4763]: E0131 14:55:21.041901 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:21 crc kubenswrapper[4763]: E0131 14:55:21.041958 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.059256 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.079819 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.096298 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.096335 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.096347 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.096363 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.096375 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:21Z","lastTransitionTime":"2026-01-31T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.096471 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.113554 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.129574 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.147757 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.164767 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.180185 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc 
kubenswrapper[4763]: I0131 14:55:21.194987 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.198965 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.199024 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.199041 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.199068 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.199085 4763 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:21Z","lastTransitionTime":"2026-01-31T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.220415 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.235596 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\
\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.260576 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\
\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:12Z\\\",\\\"message\\\":\\\"ved *v1.Node event handler 2\\\\nI0131 14:55:12.246342 6167 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:12.246366 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:12.246614 6167 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246653 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:12.246751 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:12.246781 6167 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:12.246844 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246855 6167 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:12.246869 6167 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:12.246911 6167 factory.go:656] Stopping watch factory\\\\nI0131 14:55:12.246936 6167 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:12.246949 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:12.246966 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:12.246968 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 14:55:12.246988 6167 handler.go:208] Removed *v1.Pod 
event handler 6\\\\nF0131 14:55:12.247078 6167 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.287793 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e
04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.302332 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.302414 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.302428 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.302451 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.302495 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:21Z","lastTransitionTime":"2026-01-31T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.304056 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.318433 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.333156 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.347577 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.404809 4763 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.404921 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.404965 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.404993 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.405010 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:21Z","lastTransitionTime":"2026-01-31T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.509917 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.510025 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.510051 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.510086 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.510111 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:21Z","lastTransitionTime":"2026-01-31T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.613277 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.613353 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.613372 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.613397 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.613414 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:21Z","lastTransitionTime":"2026-01-31T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.716572 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.716635 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.716651 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.716676 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.716718 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:21Z","lastTransitionTime":"2026-01-31T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.819919 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.819990 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.820012 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.820042 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.820089 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:21Z","lastTransitionTime":"2026-01-31T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.922560 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.922609 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.922617 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.922632 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.922641 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:21Z","lastTransitionTime":"2026-01-31T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.016319 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 05:58:56.998646201 +0000 UTC Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.025257 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.025284 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.025293 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.025305 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.025314 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.128555 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.128622 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.128640 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.128664 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.128681 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.231383 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.231443 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.231460 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.231483 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.231499 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.334154 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.334229 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.334251 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.334281 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.334303 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.437092 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.437174 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.437194 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.437217 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.437236 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.540280 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.540364 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.540387 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.540419 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.540444 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.643170 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.643241 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.643257 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.643282 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.643301 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.746766 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.746828 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.746847 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.746870 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.746886 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.769500 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:22 crc kubenswrapper[4763]: E0131 14:55:22.769768 4763 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:22 crc kubenswrapper[4763]: E0131 14:55:22.769852 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs podName:84302428-88e1-47ba-84cc-7d12472f9aa2 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:30.769830892 +0000 UTC m=+50.524569215 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs") pod "network-metrics-daemon-26pm5" (UID: "84302428-88e1-47ba-84cc-7d12472f9aa2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.849635 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.849722 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.849742 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.849767 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.849785 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.953859 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.953933 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.953953 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.953981 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.954000 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.016854 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 06:20:52.643231486 +0000 UTC Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.041423 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.041542 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:23 crc kubenswrapper[4763]: E0131 14:55:23.041585 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.041603 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.041742 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:23 crc kubenswrapper[4763]: E0131 14:55:23.042136 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:23 crc kubenswrapper[4763]: E0131 14:55:23.042278 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:23 crc kubenswrapper[4763]: E0131 14:55:23.042418 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.057281 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.057435 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.057462 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.057486 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.057508 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.160635 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.160760 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.160783 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.160809 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.160827 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.263747 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.263815 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.263837 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.263865 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.263887 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.367200 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.367269 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.367286 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.367311 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.367329 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.470735 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.470778 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.470787 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.470800 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.470810 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.573309 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.573377 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.573400 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.573429 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.573449 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.676283 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.676337 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.676354 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.676377 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.676394 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.779518 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.779592 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.779609 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.779636 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.779651 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.883530 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.883609 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.883627 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.883657 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.883680 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.987040 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.987129 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.987149 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.987173 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.987191 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.017272 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:27:10.899969901 +0000 UTC Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.090029 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.090197 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.090222 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.090254 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.090275 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:24Z","lastTransitionTime":"2026-01-31T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.194469 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.194822 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.194882 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.194906 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.194923 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:24Z","lastTransitionTime":"2026-01-31T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.299107 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.299177 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.299194 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.299219 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.299237 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:24Z","lastTransitionTime":"2026-01-31T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.401597 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.401634 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.401645 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.401662 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.401674 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:24Z","lastTransitionTime":"2026-01-31T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.504865 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.504936 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.504959 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.504985 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.505003 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:24Z","lastTransitionTime":"2026-01-31T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.607748 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.607795 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.607807 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.607824 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.607836 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:24Z","lastTransitionTime":"2026-01-31T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.712437 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.712508 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.712525 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.712550 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.712568 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:24Z","lastTransitionTime":"2026-01-31T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.816352 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.816412 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.816429 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.816452 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.816469 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:24Z","lastTransitionTime":"2026-01-31T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.919846 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.919913 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.919930 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.919956 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.919975 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:24Z","lastTransitionTime":"2026-01-31T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.018209 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 00:24:39.015603697 +0000 UTC Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.023003 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.023086 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.023106 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.023133 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.023154 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.041790 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.041823 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.041852 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:25 crc kubenswrapper[4763]: E0131 14:55:25.042057 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.042117 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:25 crc kubenswrapper[4763]: E0131 14:55:25.042248 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:25 crc kubenswrapper[4763]: E0131 14:55:25.042369 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:25 crc kubenswrapper[4763]: E0131 14:55:25.042449 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.127205 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.127270 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.127286 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.127311 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.127332 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.231599 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.231688 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.231728 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.231758 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.231779 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.335053 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.335126 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.335149 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.335177 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.335194 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.438622 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.438732 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.438760 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.438790 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.438844 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.541627 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.541736 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.541766 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.541825 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.541847 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.644588 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.644663 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.644681 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.644739 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.644765 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.748295 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.748358 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.748375 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.748398 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.748418 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.851171 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.851248 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.851272 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.851306 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.851329 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.954410 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.954485 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.954509 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.954537 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.954573 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.018825 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 17:03:14.739357329 +0000 UTC Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.042318 4763 scope.go:117] "RemoveContainer" containerID="307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.057275 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.057357 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.057380 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.057409 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.057431 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:26Z","lastTransitionTime":"2026-01-31T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.160546 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.160618 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.160634 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.160653 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.160669 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:26Z","lastTransitionTime":"2026-01-31T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.264355 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.264410 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.264421 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.264446 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.264459 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:26Z","lastTransitionTime":"2026-01-31T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.367610 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.367674 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.367718 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.367743 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.367759 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:26Z","lastTransitionTime":"2026-01-31T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.396027 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/1.log" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.400281 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.400500 4763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.424942 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51
d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.444185 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.463271 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.470419 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.470496 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.470519 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.470549 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.470570 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:26Z","lastTransitionTime":"2026-01-31T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.488986 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.509153 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.531984 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.553603 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.568758 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc 
kubenswrapper[4763]: I0131 14:55:26.573347 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.573387 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.573398 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.573419 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.573432 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:26Z","lastTransitionTime":"2026-01-31T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.582419 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.603730 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.614372 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.624683 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.641051 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\
":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:12Z\\\",\\\"message\\\":\\\"ved *v1.Node event handler 2\\\\nI0131 14:55:12.246342 6167 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:12.246366 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:12.246614 6167 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246653 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:12.246751 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:12.246781 6167 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:12.246844 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246855 6167 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:12.246869 6167 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:12.246911 6167 factory.go:656] Stopping watch factory\\\\nI0131 14:55:12.246936 6167 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:12.246949 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:12.246966 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:12.246968 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 14:55:12.246988 6167 handler.go:208] Removed *v1.Pod 
event handler 6\\\\nF0131 14:55:12.247078 6167 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.657470 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e
04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.694295 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.696031 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.696071 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.696082 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.696103 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.696115 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:26Z","lastTransitionTime":"2026-01-31T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.714144 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.725358 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.798893 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.798946 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.798961 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.798982 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.798998 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:26Z","lastTransitionTime":"2026-01-31T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.901346 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.901407 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.901431 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.901460 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.901481 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:26Z","lastTransitionTime":"2026-01-31T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.989026 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.004268 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.004340 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.004358 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.004382 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.004401 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.006145 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.006186 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.006205 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.006229 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.006246 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.019812 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:59:18.069463573 +0000 UTC Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.031811 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.036442 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.036504 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.036521 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.036545 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.036562 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.041529 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.041595 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.041644 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.041683 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.041756 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.041788 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.041920 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.042145 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.055481 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.061281 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.061325 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.061335 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.061352 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.061364 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.073323 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.076623 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.076650 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.076659 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.076673 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.076750 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.088201 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.092475 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.092540 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.092556 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.092578 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.092614 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.104641 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.104770 4763 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.106174 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.106219 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.106231 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.106247 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.106260 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.209116 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.209158 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.209169 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.209184 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.209196 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.312599 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.312661 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.312685 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.312749 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.312772 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.407314 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/2.log" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.408480 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/1.log" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.415448 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.415846 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.416086 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.416298 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.416511 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.417403 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25" exitCode=1 Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.417452 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.417489 4763 scope.go:117] "RemoveContainer" containerID="307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.418567 4763 scope.go:117] "RemoveContainer" containerID="e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25" Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.418835 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.446164 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.466202 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.482594 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc 
kubenswrapper[4763]: I0131 14:55:27.497547 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.519582 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.519614 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.519622 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.519634 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.519643 4763 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.520665 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.535090 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\
\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.567450 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866b
e30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.585787 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.604108 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.621051 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.623103 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.623138 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.623150 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.623166 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.623178 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.639264 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.672315 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:12Z\\\",\\\"message\\\":\\\"ved *v1.Node event handler 2\\\\nI0131 14:55:12.246342 6167 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:12.246366 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:12.246614 6167 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246653 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:12.246751 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:12.246781 6167 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:12.246844 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246855 6167 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:12.246869 6167 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:12.246911 6167 factory.go:656] Stopping watch factory\\\\nI0131 14:55:12.246936 6167 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:12.246949 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:12.246966 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:12.246968 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 14:55:12.246988 6167 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0131 14:55:12.247078 6167 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 6365 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.693204 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.712302 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.726243 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.726271 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.726281 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.726296 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.726307 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.732950 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.751896 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.767972 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.829414 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.829852 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.830022 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.830205 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.830384 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.933641 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.934059 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.934222 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.934380 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.934514 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.020606 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 15:21:19.402261649 +0000 UTC Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.038308 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.038370 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.038389 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.038411 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.038428 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.141111 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.141448 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.141623 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.141903 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.142097 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.244850 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.244915 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.244934 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.244956 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.244975 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.347369 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.347396 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.347404 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.347417 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.347426 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.422428 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/2.log" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.426385 4763 scope.go:117] "RemoveContainer" containerID="e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25" Jan 31 14:55:28 crc kubenswrapper[4763]: E0131 14:55:28.426708 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.440977 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 
2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.449923 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.449957 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.449966 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.449979 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.449988 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.456782 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db77
08c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\"
:\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.466730 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154e
dc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.474715 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.483534 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\
\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.505366 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e32
4ecdf5102fe9f2332c3dfb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 6365 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.537393 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7
a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.552533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.552820 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:28 crc 
kubenswrapper[4763]: I0131 14:55:28.552958 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.553105 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.553261 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.559851 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.575069 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.587624 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.603753 4763 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8
d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.621900 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.638005 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.655237 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.656555 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.656584 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.656595 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.656612 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.656623 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.670738 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.684259 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.697288 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc 
kubenswrapper[4763]: I0131 14:55:28.759103 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.759160 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.759178 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.759201 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.759219 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.862478 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.862530 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.862547 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.862569 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.862585 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.965285 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.965363 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.965386 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.965413 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.965430 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.022509 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:36:10.071461327 +0000 UTC Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.041032 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.041227 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.041127 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.041078 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:29 crc kubenswrapper[4763]: E0131 14:55:29.041733 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:29 crc kubenswrapper[4763]: E0131 14:55:29.041927 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:29 crc kubenswrapper[4763]: E0131 14:55:29.042138 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:29 crc kubenswrapper[4763]: E0131 14:55:29.042304 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.068225 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.068281 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.068346 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.068379 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.068404 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.171114 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.171175 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.171193 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.171219 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.171237 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.274342 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.274427 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.274452 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.274481 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.274503 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.377671 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.377769 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.377788 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.377821 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.377846 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.480650 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.480721 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.480734 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.480751 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.480764 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.584435 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.584483 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.584501 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.584523 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.584540 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.687510 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.687559 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.687575 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.687598 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.687614 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.790372 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.790438 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.790460 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.790489 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.790511 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.893466 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.893522 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.893533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.893553 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.893565 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.948066 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.963916 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.971068 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:29Z is after 2025-08-24T17:21:41Z" Jan 31 
14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.986881 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.997837 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.997881 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.997895 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.997912 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.997923 4763 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.010683 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.023646 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 14:02:07.918497025 +0000 UTC Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.030467 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.042464 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.058124 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.086660 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 
6365 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.100772 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.100833 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.100889 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.100914 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.100934 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:30Z","lastTransitionTime":"2026-01-31T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.120936 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.142078 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.159030 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.173797 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.191013 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.203422 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.203455 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.203467 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.203485 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.203503 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:30Z","lastTransitionTime":"2026-01-31T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.203597 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.219976 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.237662 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.257311 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"moun
tPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.273378 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.306660 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.307458 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.307517 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.307546 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.307566 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:30Z","lastTransitionTime":"2026-01-31T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.411096 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.411162 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.411180 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.411204 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.411221 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:30Z","lastTransitionTime":"2026-01-31T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.514118 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.514168 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.514187 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.514212 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.514231 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:30Z","lastTransitionTime":"2026-01-31T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.616811 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.616887 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.616915 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.616945 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.616967 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:30Z","lastTransitionTime":"2026-01-31T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.720069 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.720119 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.720135 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.720160 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.720176 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:30Z","lastTransitionTime":"2026-01-31T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.823131 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.823173 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.823191 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.823214 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.823230 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:30Z","lastTransitionTime":"2026-01-31T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.867443 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:30 crc kubenswrapper[4763]: E0131 14:55:30.867631 4763 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:30 crc kubenswrapper[4763]: E0131 14:55:30.867725 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs podName:84302428-88e1-47ba-84cc-7d12472f9aa2 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:46.867676212 +0000 UTC m=+66.622414535 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs") pod "network-metrics-daemon-26pm5" (UID: "84302428-88e1-47ba-84cc-7d12472f9aa2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.926295 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.926367 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.926384 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.926408 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.926429 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:30Z","lastTransitionTime":"2026-01-31T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.024483 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 05:51:33.24137605 +0000 UTC Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.030221 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.030277 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.030298 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.030325 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.030347 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.041391 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.041455 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.041517 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.041576 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.041682 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.041761 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.041881 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.041980 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.062579 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.083078 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.105091 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.128900 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.133583 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.133674 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.133732 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.133767 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.133791 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.152038 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.171590 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.186410 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.203124 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc 
kubenswrapper[4763]: I0131 14:55:31.220378 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.236561 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.236638 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.236657 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.236682 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.236746 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.241017 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.259743 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\
\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.273882 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.287855 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.319141 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 6365 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.332362 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.339876 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.339990 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.340023 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.340061 4763 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.340100 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.361033 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030
a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.376268 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.396588 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.442655 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.442723 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.442737 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.442752 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.442764 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.546396 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.546464 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.546483 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.546507 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.546525 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.649123 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.649462 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.649658 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.649953 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.650148 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.754283 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.754528 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.754538 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.754552 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.754562 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.857570 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.857628 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.857645 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.857672 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.857690 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.876169 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.876308 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.876328 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:56:03.876297932 +0000 UTC m=+83.631036265 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.876414 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.876455 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.876505 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.876549 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.876568 4763 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.876596 4763 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.876654 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:56:03.876629352 +0000 UTC m=+83.631367675 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.876653 4763 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.876685 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:56:03.876672463 +0000 UTC m=+83.631410796 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.876842 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:56:03.876811947 +0000 UTC m=+83.631550280 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.961097 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.961152 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.961170 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.961205 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.961242 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.978033 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.978228 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.978270 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.978290 4763 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.978372 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:56:03.978343707 +0000 UTC m=+83.733082030 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.025624 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 22:17:24.84727606 +0000 UTC Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.064924 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.064978 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.064994 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.065016 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.065032 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.167624 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.167732 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.167760 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.167791 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.167812 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.270540 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.270580 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.270591 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.270610 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.270631 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.373897 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.374246 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.374395 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.374551 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.374728 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.478015 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.478057 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.478069 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.478085 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.478096 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.581272 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.581354 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.581372 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.581397 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.581415 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.684004 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.684331 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.684540 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.684854 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.685084 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.787552 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.787584 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.787593 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.787606 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.787614 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.890257 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.890344 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.890364 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.890389 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.890406 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.993603 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.993766 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.993787 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.993809 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.993826 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.026352 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 15:23:34.486782094 +0000 UTC Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.040772 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.040860 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:33 crc kubenswrapper[4763]: E0131 14:55:33.041659 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.040957 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.040910 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:33 crc kubenswrapper[4763]: E0131 14:55:33.042262 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:33 crc kubenswrapper[4763]: E0131 14:55:33.042403 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:33 crc kubenswrapper[4763]: E0131 14:55:33.042532 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.096305 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.096364 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.096648 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.097004 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.097070 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:33Z","lastTransitionTime":"2026-01-31T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.201759 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.201857 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.201882 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.201912 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.201934 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:33Z","lastTransitionTime":"2026-01-31T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.305583 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.305648 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.305667 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.305692 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.305738 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:33Z","lastTransitionTime":"2026-01-31T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.408814 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.408893 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.408912 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.408939 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.408957 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:33Z","lastTransitionTime":"2026-01-31T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.512617 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.512789 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.512870 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.512907 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.512984 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:33Z","lastTransitionTime":"2026-01-31T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.616477 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.616531 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.616547 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.616569 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.616586 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:33Z","lastTransitionTime":"2026-01-31T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.720198 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.720259 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.720278 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.720301 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.720318 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:33Z","lastTransitionTime":"2026-01-31T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.823214 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.823262 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.823278 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.823301 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.823318 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:33Z","lastTransitionTime":"2026-01-31T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.926665 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.926757 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.926775 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.926797 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.926818 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:33Z","lastTransitionTime":"2026-01-31T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.027315 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:07:08.655272256 +0000 UTC
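Note how the rotation deadline differs on each certificate_manager.go line (2025-11-17 here, 2026-01-07 and 2025-12-20 further down) while the expiration stays fixed at 2026-02-24: the deadline is recomputed with random jitter on every attempt, and because every computed deadline already lies in the past relative to the node clock (2026-01-31), rotation is immediately due and is retried about once a second. A sketch of the jittered-deadline computation, assuming client-go's usual policy of picking a point at roughly 70-90% of the validity window (the certificate's NotBefore is not shown in the log, so it is assumed here):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point late in the certificate's validity
// window, mirroring (approximately) the jitter visible in the log above.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	validity := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // assumed jitter range
	return notBefore.Add(time.Duration(float64(validity) * frac))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-90 * 24 * time.Hour) // assumed issuance time
	fmt.Println("rotate at:", rotationDeadline(notBefore, notAfter))
}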
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.029459 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.029510 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.029561 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.029583 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.029599 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.133575 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.133638 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.133662 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.133808 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.133838 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.236835 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.236893 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.236910 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.236955 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.236973 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.340343 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.340435 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.340458 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.340479 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.340526 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.443498 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.443568 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.443587 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.443611 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.443629 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.546391 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.546462 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.546475 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.546494 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.546506 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.649680 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.649733 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.649741 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.649755 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.649765 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.752867 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.752968 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.752997 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.753028 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.753048 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.856410 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.856475 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.856492 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.856515 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.856531 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.959854 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.959897 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.959906 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.959925 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.959935 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.028159 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:13:26.981004343 +0000 UTC
Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.041629 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.041733 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5"
Jan 31 14:55:35 crc kubenswrapper[4763]: E0131 14:55:35.041874 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.041903 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.042028 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:55:35 crc kubenswrapper[4763]: E0131 14:55:35.042031 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2"
Jan 31 14:55:35 crc kubenswrapper[4763]: E0131 14:55:35.042132 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 14:55:35 crc kubenswrapper[4763]: E0131 14:55:35.042273 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
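The four pods above (networking-console-plugin, network-metrics-daemon, network-check-target, network-check-source) all need a pod-network sandbox, so the kubelet skips their sync entirely until the runtime reports NetworkReady=true; host-network pods are exempt, which is why the rest of the control plane keeps running. A simplified sketch of the gating predicate implied by these entries (not the kubelet's literal code):

package main

import "fmt"

type Pod struct {
	Name        string
	HostNetwork bool
}

// sandboxGate mirrors the decision behind "Error syncing pod, skipping":
// a pod that does not share the host's network namespace cannot get a
// sandbox while the CNI plugin is not ready.
func sandboxGate(networkReady bool, p Pod) error {
	if !networkReady && !p.HostNetwork {
		return fmt.Errorf("network is not ready: skipping sync for pod %q", p.Name)
	}
	return nil
}

func main() {
	fmt.Println(sandboxGate(false, Pod{Name: "network-check-target-xd92c"}))
	fmt.Println(sandboxGate(false, Pod{Name: "etcd-crc", HostNetwork: true})) // hypothetical host-network pod
}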
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.062724 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.062774 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.062792 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.062814 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.062831 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.165463 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.165488 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.165496 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.165508 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.165515 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.273889 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.273961 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.273980 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.274004 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.274031 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.377126 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.377205 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.377230 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.377266 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.377295 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.480302 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.480361 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.480378 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.480402 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.480420 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.583533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.583573 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.583585 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.583602 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.583614 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.686654 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.686867 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.686897 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.686932 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.686953 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.790150 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.790215 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.790238 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.790269 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.790289 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.892941 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.893009 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.893027 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.893056 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.893076 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.995971 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.996035 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.996058 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.996088 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.996110 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.028793 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:43:23.15623109 +0000 UTC Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.099064 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.099114 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.099136 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.099163 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.099183 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:36Z","lastTransitionTime":"2026-01-31T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.202118 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.202186 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.202204 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.202228 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.202248 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:36Z","lastTransitionTime":"2026-01-31T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.304828 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.304900 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.304923 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.304951 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.304971 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:36Z","lastTransitionTime":"2026-01-31T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.409751 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.409809 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.409831 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.409859 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.409879 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:36Z","lastTransitionTime":"2026-01-31T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.514967 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.515025 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.515042 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.515066 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.515083 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:36Z","lastTransitionTime":"2026-01-31T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.618567 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.618633 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.618651 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.618675 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.618743 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:36Z","lastTransitionTime":"2026-01-31T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.722324 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.722380 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.722396 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.722421 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.722441 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:36Z","lastTransitionTime":"2026-01-31T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.825728 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.825819 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.825842 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.825871 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.825891 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:36Z","lastTransitionTime":"2026-01-31T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.928731 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.928781 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.928798 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.928849 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.928868 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:36Z","lastTransitionTime":"2026-01-31T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.029400 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 16:28:01.027676508 +0000 UTC Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.031255 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.031298 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.031314 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.031338 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.031355 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.041969 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.042117 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.042337 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.042414 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.042541 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.042608 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.042746 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.042863 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.134212 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.134289 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.134311 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.134339 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.134360 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.220133 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.220193 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.220209 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.220233 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.220250 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.239084 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:37Z is after 2025-08-24T17:21:41Z"
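This is the most specific failure in the section: the node-status patch is rejected not by the API server itself but by the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24 while the node clock reads 2026-01-31. A small diagnostic sketch that reproduces the check, assuming the endpoint is reachable from the node (verification is skipped only so the handshake completes and the presented certificate can be inspected):

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Connect to the webhook endpoint named in the error above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	// Compare the leaf certificate's validity window against the local clock;
	// this is exactly the comparison the x509 error message reports.
	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s expired=%v\n", cert.NotBefore, cert.NotAfter, now.After(cert.NotAfter))
}

Until that webhook's certificate is renewed, every status patch fails the same way, which is why the kubelet logs "will retry" and immediately attempts the patch again below.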
event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.243927 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.243948 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.243967 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.270958 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:37Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.276391 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.276478 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.276496 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.276518 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.276538 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.297731 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:37Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.302917 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.302966 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
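The webhook failure above is Go's standard certificate verification doing its job: the serving certificate presented at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, months before the node's clock time of 2026-01-31T14:55:37Z. A minimal sketch of the same validity-window check, assuming a hypothetical path for the webhook's serving certificate:

```go
// Sketch: reproduce the client-side check that yields
// "x509: certificate has expired or is not yet valid".
// The certificate path below is an assumption for illustration,
// not something taken from the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/tmp/webhook-serving.crt") // hypothetical path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	// crypto/x509 performs exactly this window check during verification.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate has expired or is not yet valid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}
```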
event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.302982 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.303004 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.303021 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.320956 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:37Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.325538 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.325599 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
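The body that kubelet keeps retrying is a strategic merge patch against the node's status subresource; the $setElementOrder/conditions directive preserves the ordering of the conditions list on merge. A minimal client-go sketch of issuing such a patch (the in-cluster config and the trimmed condition payload are assumptions for illustration, not the kubelet's own code path):

```go
// Sketch: send a strategic merge patch to a node's "status" subresource
// with client-go. Patching status is what triggers the validating webhook
// (node.network-node-identity.openshift.io) seen in the log.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumed: running inside the cluster
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Trimmed-down analogue of the condition payload in the log.
	patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False","reason":"KubeletNotReady"}]}}`)
	_, err = client.CoreV1().Nodes().Patch(context.TODO(), "crc",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status")
	if err != nil {
		panic(err) // here this would surface the webhook/x509 error
	}
}
```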
event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.325617 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.325643 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.325661 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.342671 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:37Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.342931 4763 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.345639 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
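After repeated identical webhook failures, kubelet gives up on this sync with "update node status exceeds retry count". In upstream kubelet the per-sync retry budget is the constant nodeStatusUpdateRetry, which is 5; a sketch of that pattern, with the actual patch call stubbed out:

```go
// Sketch of the retry pattern behind "update node status exceeds retry
// count". tryUpdateNodeStatus is a stand-in for the real status patch;
// the retry constant mirrors upstream kubelet's nodeStatusUpdateRetry.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // upstream kubelet's per-sync retry budget

func tryUpdateNodeStatus(attempt int) error {
	// Stand-in: every attempt fails the way the webhook call fails above.
	return errors.New("failed calling webhook: tls: failed to verify certificate")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(i); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```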
event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.345690 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.345714 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.345730 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.345743 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.448632 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.448718 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.448738 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.448763 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.448783 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.551198 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.551254 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.551275 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.551301 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.551319 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.654870 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.654995 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.655018 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.655049 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.655069 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.757927 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.757992 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.758011 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.758034 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.758051 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.860921 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.860988 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.861011 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.861040 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.861085 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.963495 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.963823 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.963888 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.963913 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.963935 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.030376 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 01:07:46.975664357 +0000 UTC Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.066372 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.066470 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.066489 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.066510 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.066529 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.169237 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.169300 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.169317 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.169337 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.169354 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.271369 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.271431 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.271443 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.271479 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.271489 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.373563 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.373613 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.373630 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.373653 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.373670 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.476853 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.476917 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.477004 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.477040 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.477063 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.580519 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.580600 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.580615 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.580635 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.580651 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.684188 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.684248 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.684266 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.684290 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.684307 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.787410 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.787437 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.787445 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.787458 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.787467 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.890035 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.890085 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.890100 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.890115 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.890125 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.994804 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.994872 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.994890 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.994916 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.995410 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.031407 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 08:44:04.662399573 +0000 UTC Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.041124 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.041263 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:39 crc kubenswrapper[4763]: E0131 14:55:39.041362 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.041437 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:39 crc kubenswrapper[4763]: E0131 14:55:39.041588 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.041663 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:39 crc kubenswrapper[4763]: E0131 14:55:39.041812 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:39 crc kubenswrapper[4763]: E0131 14:55:39.041984 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.098186 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.098229 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.098239 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.098254 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.098263 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:39Z","lastTransitionTime":"2026-01-31T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.201480 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.201524 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.201534 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.201554 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.201566 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:39Z","lastTransitionTime":"2026-01-31T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.304541 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.304579 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.304590 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.304606 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.304617 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:39Z","lastTransitionTime":"2026-01-31T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.406959 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.407013 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.407030 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.407055 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.407073 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:39Z","lastTransitionTime":"2026-01-31T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.510183 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.510292 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.510320 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.510352 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.510929 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:39Z","lastTransitionTime":"2026-01-31T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.614642 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.614686 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.614740 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.614762 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.614779 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:39Z","lastTransitionTime":"2026-01-31T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.717436 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.717486 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.717502 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.717524 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.717544 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:39Z","lastTransitionTime":"2026-01-31T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.820282 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.820335 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.820346 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.820361 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.820370 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:39Z","lastTransitionTime":"2026-01-31T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.923466 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.923526 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.923552 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.923582 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.923600 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:39Z","lastTransitionTime":"2026-01-31T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.026573 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.026632 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.026649 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.026674 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.026721 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.032054 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:01:21.922183479 +0000 UTC Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.131272 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.131336 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.131352 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.131376 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.131394 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.234289 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.234370 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.234392 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.234422 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.234445 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.336876 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.336949 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.336967 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.336995 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.337014 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.439809 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.439862 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.439879 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.439903 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.439919 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.542334 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.542415 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.542444 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.542480 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.542501 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.646430 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.646530 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.646558 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.646586 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.646604 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.749742 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.749802 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.749829 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.749852 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.749867 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.852961 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.853031 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.853053 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.853082 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.853102 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.956604 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.956672 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.956690 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.956743 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.956760 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.032759 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 22:03:37.898908558 +0000 UTC Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.041546 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.041600 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.041766 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:41 crc kubenswrapper[4763]: E0131 14:55:41.041751 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:41 crc kubenswrapper[4763]: E0131 14:55:41.041852 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:41 crc kubenswrapper[4763]: E0131 14:55:41.042008 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.042058 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:41 crc kubenswrapper[4763]: E0131 14:55:41.042153 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.059480 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.059535 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.059552 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.059583 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.059604 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.077908 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.095168 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.115481 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.130514 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.146670 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.161907 4763 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.161941 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.161949 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.161969 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.161979 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.169969 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e32
4ecdf5102fe9f2332c3dfb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 6365 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.188594 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.205712 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.221583 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.233882 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.246392 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.260151 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.264859 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.264895 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.264902 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.264917 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.264930 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.274413 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.287828 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.302813 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.315139 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.329825 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.343967 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 
14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.367169 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.367228 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.367242 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.367261 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.367273 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.469753 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.469827 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.469858 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.469885 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.469903 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.573256 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.573323 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.573341 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.573366 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.573384 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.677357 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.677420 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.677443 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.677475 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.677498 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.780035 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.780096 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.780114 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.780139 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.780156 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.883735 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.883815 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.883840 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.883869 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.883891 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.986339 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.986845 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.986948 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.987022 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.987092 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.033311 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 21:56:28.829325571 +0000 UTC
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.090265 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.090296 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.090306 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.090322 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.090332 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:42Z","lastTransitionTime":"2026-01-31T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
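The webhook failures earlier in this log all reduce to a plain x509 validity-window rejection: the client compares its clock (2026-01-31T14:55:41Z) against the certificate's NotAfter (2025-08-24T17:21:41Z) and refuses the connection. Below is a minimal sketch of that check using only Go's standard crypto/x509; the certificate path is hypothetical and this mirrors the error text in the log, not the exact kubelet code path.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; any PEM-encoded server certificate works here.
	pemBytes, err := os.ReadFile("server.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	// This validity-window test is what yields
	// "certificate has expired or is not yet valid".
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate has expired or is not yet valid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}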
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.193185 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.193551 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.193795 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.194006 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.194194 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:42Z","lastTransitionTime":"2026-01-31T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.297344 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.297405 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.297423 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.297447 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.297464 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:42Z","lastTransitionTime":"2026-01-31T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.399801 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.399846 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.399856 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.399873 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.399886 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:42Z","lastTransitionTime":"2026-01-31T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.502842 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.502904 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.502923 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.502948 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.502967 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:42Z","lastTransitionTime":"2026-01-31T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.606429 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.606506 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.606528 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.606561 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.606585 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:42Z","lastTransitionTime":"2026-01-31T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.709065 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.709128 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.709143 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.709163 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.709176 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:42Z","lastTransitionTime":"2026-01-31T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.812056 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.812110 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.812129 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.812151 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.812168 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:42Z","lastTransitionTime":"2026-01-31T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.914769 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.914835 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.914861 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.914886 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.914904 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:42Z","lastTransitionTime":"2026-01-31T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.017978 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.018057 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.018079 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.018108 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.018128 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.033789 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 11:27:36.902172942 +0000 UTC
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.041300 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.041381 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.041773 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.041798 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5"
Jan 31 14:55:43 crc kubenswrapper[4763]: E0131 14:55:43.042084 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 14:55:43 crc kubenswrapper[4763]: E0131 14:55:43.042129 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 14:55:43 crc kubenswrapper[4763]: E0131 14:55:43.042231 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 14:55:43 crc kubenswrapper[4763]: E0131 14:55:43.042325 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.042990 4763 scope.go:117] "RemoveContainer" containerID="e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25"
Jan 31 14:55:43 crc kubenswrapper[4763]: E0131 14:55:43.043324 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.121537 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.121588 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.121606 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.121630 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.121648 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.225539 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.225604 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.225627 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.225657 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.225678 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.328345 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.328434 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.328449 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.328469 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.328482 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.431798 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.431869 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.431887 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.431911 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.431930 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.534727 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.534868 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.534901 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.534932 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.534955 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
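The recurring NetworkReady=false condition above is raised while nothing in /etc/kubernetes/cni/net.d/ can be loaded as a CNI network configuration. Below is a rough sketch of that directory check; it assumes the file extensions libcni scans for, and unlike the real code (which also parses and validates each file's contents) it only covers the "no configuration file" case seen in this log.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one candidate CNI
// network configuration file (.conf, .conflist, or .json).
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		// Matches the situation in the log: the runtime reports the
		// network plugin as not ready and the node stays NotReady.
		fmt.Println("no CNI configuration file found; network plugin not ready")
		return
	}
	fmt.Println("CNI configuration present")
}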
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.639065 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.639128 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.639146 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.639173 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.639192 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.742409 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.742472 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.742532 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.742565 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.742589 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.846335 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.846384 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.846400 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.846421 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.846439 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.950130 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.950184 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.950200 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.950224 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.950240 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.034801 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 10:47:55.8535442 +0000 UTC
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.053754 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.053800 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.053817 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.053839 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.053855 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.157219 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.157290 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.157933 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.157967 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.157993 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.260391 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.260424 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.260435 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.260449 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.260459 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.362931 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.362959 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.362967 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.362979 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.362988 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
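The certificate_manager.go:356 lines deserve a note: the serving certificate's expiration stays fixed at 2026-02-24 05:53:03 UTC, while the rotation deadline is recomputed to a different, already-past instant on each attempt, which is why the message repeats roughly every second. client-go's certificate manager picks the deadline at a jittered point late in the validity window so a fleet of kubelets does not all rotate at once. The sketch below only loosely mirrors that behavior; the 70% lower bound is an assumption from memory rather than a quoted constant, and the notBefore value is hypothetical (only the expiry comes from the log).

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in the last portion of the
// certificate's validity window. Each call yields a different deadline,
// matching the varying "rotation deadline is ..." values in the log.
func rotationDeadline(notBefore, notAfter time.Time, r *rand.Rand) time.Time {
	total := notAfter.Sub(notBefore)
	// Assumed jitter range: between 70% and 100% of the validity window.
	jittered := time.Duration(float64(total) * (0.7 + 0.3*r.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	r := rand.New(rand.NewSource(time.Now().UnixNano()))
	notBefore := time.Date(2025, 11, 26, 5, 53, 3, 0, time.UTC) // hypothetical issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)   // expiry seen in the log
	for i := 0; i < 3; i++ {
		// If the computed deadline is already in the past (as in this log,
		// where it is recomputed on every retry), rotation is due immediately.
		fmt.Println(rotationDeadline(notBefore, notAfter, r))
	}
}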
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.465672 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.465726 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.465735 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.465752 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.465760 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.568482 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.568517 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.568530 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.568547 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.568559 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.671965 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.672005 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.672013 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.672028 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.672039 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.775017 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.775090 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.775113 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.775142 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.775163 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.878354 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.878410 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.878444 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.878472 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.878493 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.981147 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.981191 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.981209 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.981231 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.981241 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.035210 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:32:22.933483628 +0000 UTC
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.041673 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.041813 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5"
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.041930 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:55:45 crc kubenswrapper[4763]: E0131 14:55:45.041937 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.041994 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:55:45 crc kubenswrapper[4763]: E0131 14:55:45.042062 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 14:55:45 crc kubenswrapper[4763]: E0131 14:55:45.042175 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 14:55:45 crc kubenswrapper[4763]: E0131 14:55:45.042250 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.084342 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.084407 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.084430 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.084460 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.084482 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:45Z","lastTransitionTime":"2026-01-31T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.187381 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.187437 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.187452 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.187473 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.187487 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:45Z","lastTransitionTime":"2026-01-31T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.290189 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.290252 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.290270 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.290294 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.290311 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:45Z","lastTransitionTime":"2026-01-31T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.392749 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.392803 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.392816 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.392834 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.392846 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:45Z","lastTransitionTime":"2026-01-31T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.495382 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.495449 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.495468 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.495493 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.495510 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:45Z","lastTransitionTime":"2026-01-31T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.597898 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.597950 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.597967 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.597992 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.598012 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:45Z","lastTransitionTime":"2026-01-31T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.700844 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.700911 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.700927 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.700949 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.700965 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:45Z","lastTransitionTime":"2026-01-31T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.803336 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.803404 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.803425 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.803454 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.803477 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:45Z","lastTransitionTime":"2026-01-31T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.905925 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.906001 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.906033 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.906080 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.906104 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:45Z","lastTransitionTime":"2026-01-31T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.008792 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.008860 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.008878 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.008904 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.008922 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.035592 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 09:59:23.576747983 +0000 UTC
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.111161 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.111193 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.111201 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.111214 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.111223 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.213226 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.213280 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.213296 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.213324 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.213340 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
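Each certificate_manager.go line in this log reports the same kubelet-serving certificate expiry, 2026-02-24 05:53:03 UTC, but a different rotation deadline; client-go jitters the deadline on every evaluation, which is why it moves between 2026-01-11, 2026-01-07 and 2025-11-24 across these entries. The certificate that actually breaks this node appears further down: the serving certificate of the node.network-node-identity.openshift.io webhook expired on 2025-08-24 while the node clock reads 2026-01-31, so every node-status patch fails TLS verification. A minimal Go sketch of that validity-window check (the file path is illustrative; the real check happens inside the TLS handshake to https://127.0.0.1:9743):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Illustrative path; the expired certificate in this log is served by
	// the webhook over TLS, not read from disk here.
	pemBytes, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if now := time.Now(); now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate has expired or is not yet valid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate valid until", cert.NotAfter.UTC())
}

The printed message mirrors the "certificate has expired or is not yet valid" error in the patch failures recorded below.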
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.315737 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.315788 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.315796 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.315812 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.315821 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.419137 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.419233 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.419255 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.419876 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.419937 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.523246 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.523283 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.523294 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.523309 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.523320 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.627894 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.627958 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.627974 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.627996 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.628013 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.730395 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.730436 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.730444 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.730458 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.730467 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.832915 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.832965 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.832973 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.832990 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.832998 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.935173 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.935226 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.935244 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.935271 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.935286 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.944906 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:46 crc kubenswrapper[4763]: E0131 14:55:46.945105 4763 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:46 crc kubenswrapper[4763]: E0131 14:55:46.945253 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs podName:84302428-88e1-47ba-84cc-7d12472f9aa2 nodeName:}" failed. No retries permitted until 2026-01-31 14:56:18.945188605 +0000 UTC m=+98.699926938 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs") pod "network-metrics-daemon-26pm5" (UID: "84302428-88e1-47ba-84cc-7d12472f9aa2") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.035953 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 03:40:55.36656374 +0000 UTC
Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.038097 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.038165 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.038184 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.038209 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.038227 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.041430 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.041484 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.041515 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.041642 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5"
Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.041634 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.041848 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
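The MountVolume.SetUp failure above is not retried immediately: nestedpendingoperations.go schedules the next attempt for 14:56:18, a durationBeforeRetry of 32s, which is consistent with a wait that doubles after each consecutive failure (32s is 0.5s times 2^6). A minimal Go sketch of that doubling-backoff pattern, with illustrative constants rather than kubelet's exact ones:

package main

import (
	"fmt"
	"time"
)

// backoff doubles the wait after every consecutive failure, up to a cap.
type backoff struct {
	delay, max time.Duration
}

func (b *backoff) next() time.Duration {
	switch {
	case b.delay == 0:
		b.delay = 500 * time.Millisecond // first retry
	case b.delay < b.max:
		b.delay *= 2
	}
	if b.delay > b.max {
		b.delay = b.max
	}
	return b.delay
}

func main() {
	b := &backoff{max: 2*time.Minute + 2*time.Second} // illustrative cap
	for failure := 1; failure <= 8; failure++ {
		fmt.Printf("failure %d: retry in %v\n", failure, b.next())
	}
	// failure 7 prints "retry in 32s", the durationBeforeRetry seen above.
}

The cap bounds how long a persistently failing mount waits between attempts, here the metrics-certs volume whose openshift-multus/metrics-daemon-secret secret is not yet registered with the kubelet, while still throttling retries against the API.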
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.042083 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.042174 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.142231 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.142304 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.142329 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.142359 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.142382 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.244491 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.244558 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.244573 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.244595 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.244610 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.347331 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.347394 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.347412 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.347439 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.347458 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.449327 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.449376 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.449387 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.449403 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.449415 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.551227 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.551296 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.551314 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.551343 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.551361 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.653519 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.653552 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.653567 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.653585 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.653597 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.698514 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.698550 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.698564 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.698576 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.698584 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.717441 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.721811 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.721847 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.721858 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.721873 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.721883 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.739737 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.743367 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.743431 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.743452 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.743477 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.743494 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.763183 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.767241 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.767270 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.767284 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.767301 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.767312 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.783306 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.787182 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.787215 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.787227 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.787241 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.787252 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.809002 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.809160 4763 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.810540 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.810571 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.810582 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.810599 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.810612 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.913113 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.913157 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.913176 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.913199 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.913216 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.015634 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.015681 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.015730 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.015753 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.015769 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.036529 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 19:03:00.341248438 +0000 UTC Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.118956 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.119016 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.119032 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.119054 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.119069 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.221162 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.221224 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.221238 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.221256 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.221268 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.323654 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.323782 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.323810 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.323836 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.323853 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.426387 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.426460 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.426484 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.426512 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.426533 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.496741 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/0.log" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.496823 4763 generic.go:334] "Generic (PLEG): container finished" podID="2335d04f-10b2-4cf8-aae6-236650539c74" containerID="e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947" exitCode=1 Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.496857 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzkhg" event={"ID":"2335d04f-10b2-4cf8-aae6-236650539c74","Type":"ContainerDied","Data":"e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.497257 4763 scope.go:117] "RemoveContainer" containerID="e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.513177 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/en
v\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.525306 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.529687 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.529790 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.529807 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.529830 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.529850 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.538238 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.552598 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.572748 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 6365 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.588979 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.620959 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e
04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.632936 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.633007 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.633030 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.633058 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.633081 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.646327 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.667223 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.685163 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.700771 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.716585 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.726417 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.735911 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.735954 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.735968 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.735988 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.735999 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.740356 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.755906 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"2026-01-31T14:55:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6\\\\n2026-01-31T14:55:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6 to /host/opt/cni/bin/\\\\n2026-01-31T14:55:03Z [verbose] multus-daemon started\\\\n2026-01-31T14:55:03Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:55:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.769826 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.786486 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 
14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.797397 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.838556 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.838621 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.838639 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.838665 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.838682 4763 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.941019 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.941049 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.941057 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.941073 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.941082 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.036799 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 02:02:47.456049567 +0000 UTC Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.041239 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:49 crc kubenswrapper[4763]: E0131 14:55:49.041459 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.041893 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:49 crc kubenswrapper[4763]: E0131 14:55:49.042034 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.042202 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.042200 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:49 crc kubenswrapper[4763]: E0131 14:55:49.042362 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:49 crc kubenswrapper[4763]: E0131 14:55:49.042485 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.043724 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.043751 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.043764 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.043780 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.043791 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.146401 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.146445 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.146457 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.146474 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.146487 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.248637 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.248808 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.248827 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.248849 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.248867 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.351040 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.351094 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.351124 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.351138 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.351147 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.454136 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.454197 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.454223 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.454256 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.454273 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.501985 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/0.log" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.502040 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzkhg" event={"ID":"2335d04f-10b2-4cf8-aae6-236650539c74","Type":"ContainerStarted","Data":"ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.516171 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.536660 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.557084 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"2026-01-31T14:55:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6\\\\n2026-01-31T14:55:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6 to /host/opt/cni/bin/\\\\n2026-01-31T14:55:03Z [verbose] multus-daemon started\\\\n2026-01-31T14:55:03Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:55:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.557230 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.557292 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.557319 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.557355 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.557380 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.574348 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.590102 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.606512 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.621557 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.638126 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.650800 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.660075 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.660147 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.660170 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.660196 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.660215 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.665149 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.695214 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 6365 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.711058 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.743393 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.763222 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.763332 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.763360 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.763370 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.763384 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.763394 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.781602 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.798968 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.815587 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.836131 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.865199 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.865237 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.865248 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.865264 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.865276 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.967926 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.968397 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.968430 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.968462 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.968482 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.037816 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:35:10.391208403 +0000 UTC Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.070969 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.071027 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.071043 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.071069 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.071085 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.174143 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.174174 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.174183 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.174197 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.174226 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.276329 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.276357 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.276365 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.276378 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.276389 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.378570 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.378616 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.378626 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.378642 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.378655 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.480685 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.480738 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.480746 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.480759 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.480769 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.583604 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.583650 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.583663 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.583691 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.583721 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.685790 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.685860 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.685874 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.685899 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.685913 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.789001 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.789078 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.789092 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.789118 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.789136 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.892024 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.892065 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.892073 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.892089 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.892099 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.995533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.995623 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.995638 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.995690 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.995796 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.037952 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 07:01:00.167466984 +0000 UTC Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.041553 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:51 crc kubenswrapper[4763]: E0131 14:55:51.041659 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.041861 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:51 crc kubenswrapper[4763]: E0131 14:55:51.041951 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.042112 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:51 crc kubenswrapper[4763]: E0131 14:55:51.042186 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.042429 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:51 crc kubenswrapper[4763]: E0131 14:55:51.042508 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.059125 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.075915 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.089840 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.098336 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.098369 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.098380 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.098398 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.098411 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:51Z","lastTransitionTime":"2026-01-31T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.101450 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.117041 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.130207 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"2026-01-31T14:55:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6\\\\n2026-01-31T14:55:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6 to /host/opt/cni/bin/\\\\n2026-01-31T14:55:03Z [verbose] multus-daemon started\\\\n2026-01-31T14:55:03Z [verbose] Readiness 
Indicator file check\\\\n2026-01-31T14:55:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.142893 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.160998 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.172417 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.191479 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.200406 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.200443 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:51 crc 
kubenswrapper[4763]: I0131 14:55:51.200455 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.200472 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.200486 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:51Z","lastTransitionTime":"2026-01-31T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.203067 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:5
5:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.225482 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:
//0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.244561 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.262030 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.275553 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.289438 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.303097 4763 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.303150 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.303159 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.303175 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.303186 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:51Z","lastTransitionTime":"2026-01-31T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.307644 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e32
4ecdf5102fe9f2332c3dfb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 6365 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.320811 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.405466 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.405504 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.405512 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.405529 4763 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.405539 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:51Z","lastTransitionTime":"2026-01-31T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.507032 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.507073 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.507086 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.507103 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.507113 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:51Z","lastTransitionTime":"2026-01-31T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.610418 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.610455 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.610465 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.610483 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.610497 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:51Z","lastTransitionTime":"2026-01-31T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.712489 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.712529 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.712538 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.712552 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.712561 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:51Z","lastTransitionTime":"2026-01-31T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.815018 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.815047 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.815058 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.815072 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.815082 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:51Z","lastTransitionTime":"2026-01-31T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.917511 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.917584 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.917607 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.917636 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.917657 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:51Z","lastTransitionTime":"2026-01-31T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.020231 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.020297 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.020315 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.020339 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.020356 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.038797 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:27:33.13352865 +0000 UTC Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.123435 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.123486 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.123496 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.123512 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.123522 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.225897 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.225938 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.225951 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.225970 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.225983 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.328502 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.328547 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.328561 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.328578 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.328591 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.431663 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.431731 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.431742 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.431789 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.431801 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.533913 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.533948 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.533960 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.533997 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.534009 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.636131 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.636184 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.636201 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.636226 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.636243 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.738492 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.738536 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.738552 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.738576 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.738595 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.840978 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.841327 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.841342 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.841356 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.841368 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.943770 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.943806 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.943818 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.943836 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.943848 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.039262 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:34:40.541969388 +0000 UTC Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.041739 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:53 crc kubenswrapper[4763]: E0131 14:55:53.041910 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.042187 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:53 crc kubenswrapper[4763]: E0131 14:55:53.042294 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.042500 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:53 crc kubenswrapper[4763]: E0131 14:55:53.042593 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.042969 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:53 crc kubenswrapper[4763]: E0131 14:55:53.043073 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.045851 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.045884 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.045898 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.045912 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.045923 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.148774 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.148848 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.148862 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.148880 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.148892 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.251872 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.251919 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.251938 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.251965 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.251983 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.354793 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.354841 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.354854 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.354871 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.354884 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.458072 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.458123 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.458136 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.458155 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.458166 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.560496 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.560560 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.560579 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.560603 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.560620 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.663226 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.663322 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.663348 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.663423 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.663452 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.765862 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.765915 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.765934 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.765958 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.765989 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.869021 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.869092 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.869110 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.869137 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.869155 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.971547 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.971618 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.971645 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.971674 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.971729 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.040030 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 05:39:33.917368299 +0000 UTC Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.074114 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.074160 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.074168 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.074188 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.074197 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:54Z","lastTransitionTime":"2026-01-31T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.177981 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.178027 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.178036 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.178051 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.178060 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:54Z","lastTransitionTime":"2026-01-31T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.281793 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.281860 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.281879 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.281904 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.281923 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:54Z","lastTransitionTime":"2026-01-31T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.385301 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.385368 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.385388 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.385419 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.385443 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:54Z","lastTransitionTime":"2026-01-31T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.488392 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.488438 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.488452 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.488471 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.488483 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:54Z","lastTransitionTime":"2026-01-31T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.591880 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.591953 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.591966 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.592003 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.592018 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:54Z","lastTransitionTime":"2026-01-31T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.694188 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.694268 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.694295 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.694327 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.694349 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:54Z","lastTransitionTime":"2026-01-31T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.797268 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.797355 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.797368 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.797392 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.797407 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:54Z","lastTransitionTime":"2026-01-31T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.901028 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.901088 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.901101 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.901124 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.901136 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:54Z","lastTransitionTime":"2026-01-31T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.003876 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.003934 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.003944 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.003966 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.003981 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.040766 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:18:43.059042521 +0000 UTC Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.040943 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.041061 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:55 crc kubenswrapper[4763]: E0131 14:55:55.041204 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.041238 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.041286 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:55 crc kubenswrapper[4763]: E0131 14:55:55.041412 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:55 crc kubenswrapper[4763]: E0131 14:55:55.041440 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:55 crc kubenswrapper[4763]: E0131 14:55:55.041545 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.107207 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.107239 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.107248 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.107264 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.107276 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.210936 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.211012 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.211036 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.211068 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.211093 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.314433 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.314504 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.314526 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.314558 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.314579 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.417943 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.418012 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.418036 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.418063 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.418079 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.521077 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.521128 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.521179 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.521214 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.521233 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.624226 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.624257 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.624268 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.624302 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.624312 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.726364 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.726427 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.726441 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.726464 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.726479 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.828671 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.828733 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.828742 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.828756 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.828765 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.931212 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.931331 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.931356 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.931386 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.931407 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.035301 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.035394 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.035410 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.035435 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.035451 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.041967 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:00:57.817626118 +0000 UTC Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.139807 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.139896 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.139926 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.139964 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.139987 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.244234 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.244291 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.244304 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.244330 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.244345 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.347280 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.347394 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.347414 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.347442 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.347460 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.450319 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.450403 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.450429 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.450460 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.450487 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.553964 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.554054 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.554076 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.554107 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.554132 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.656818 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.656872 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.656887 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.656907 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.656920 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.761585 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.761655 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.761675 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.761737 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.761757 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.865684 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.865833 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.865865 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.865907 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.865982 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.969513 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.969577 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.969593 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.969636 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.969654 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.041755 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.041805 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.041911 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.041938 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.042027 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.043004 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.042756 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.042195 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 21:47:41.268492047 +0000 UTC Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.043184 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.043257 4763 scope.go:117] "RemoveContainer" containerID="e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.096305 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.096363 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.096385 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.096410 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.096430 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.203855 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.203913 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.203931 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.203955 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.203973 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.306567 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.306603 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.306614 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.306630 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.306642 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.409493 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.409548 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.409564 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.409586 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.409602 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.512274 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.512305 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.512314 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.512327 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.512337 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.533083 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/2.log" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.536442 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.537057 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.563158 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd
838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.590574 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.610136 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.614182 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.614226 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.614241 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.614256 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.614268 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.625125 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.636587 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.647780 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.657853 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"2026-01-31T14:55:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6\\\\n2026-01-31T14:55:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6 to /host/opt/cni/bin/\\\\n2026-01-31T14:55:03Z [verbose] multus-daemon started\\\\n2026-01-31T14:55:03Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:55:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.665740 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.673614 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.686135 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.701674 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 
14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.712467 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.715610 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.715634 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.715643 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.715658 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.715670 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.731532 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.744778 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.756573 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.765714 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.774420 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.792115 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 6365 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.817521 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.817551 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.817562 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.817577 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.817588 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.835749 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.835772 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.835783 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.835797 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.835807 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.847390 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.850337 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.850369 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.850380 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.850395 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.850406 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.860974 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.863169 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.863196 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.863205 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.863217 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.863225 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.873075 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.876023 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.876054 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.876062 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.876077 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.876086 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.885259 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.888049 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.888127 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.888153 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.888207 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.888231 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.908842 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image list identical to the first status-patch attempt above... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.909089 4763 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.920650 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.920730 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.920752 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.920781 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.920801 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.022904 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.022959 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.022976 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.023001 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.023017 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.043474 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 05:35:03.728064693 +0000 UTC Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.125450 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.125520 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.125542 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.125574 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.125594 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.227867 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.227932 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.227959 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.227988 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.228009 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.330790 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.330845 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.330863 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.330886 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.330902 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.433192 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.433255 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.433279 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.433300 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.433320 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.535801 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.535862 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.535878 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.535902 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.535918 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.541582 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/3.log" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.542163 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/2.log" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.545746 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862" exitCode=1 Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.545810 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.545874 4763 scope.go:117] "RemoveContainer" containerID="e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.546930 4763 scope.go:117] "RemoveContainer" containerID="c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862" Jan 31 14:55:58 crc kubenswrapper[4763]: E0131 14:55:58.547251 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.570843 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.587340 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"2026-01-31T14:55:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6\\\\n2026-01-31T14:55:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6 to /host/opt/cni/bin/\\\\n2026-01-31T14:55:03Z [verbose] multus-daemon started\\\\n2026-01-31T14:55:03Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:55:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.601411 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.616580 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.633809 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.638621 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.638682 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc 
kubenswrapper[4763]: I0131 14:55:58.638736 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.638763 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.638786 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.647662 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:5
5:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.659262 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.683408 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e
04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.698659 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.712877 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.724977 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.739238 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.741350 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.741387 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.741398 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.741415 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.741427 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.762122 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d9a3481eb12d0b1658e4141e9fa237e24ea0a7
12155a96e32eb577f03cd862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 6365 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"ps://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:55:57.965475 6780 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI0131 14:55:57.965601 6780 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.777036 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.796079 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.811095 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.826414 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.843466 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.844609 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.844671 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.844730 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.844764 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.844790 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.947965 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.948010 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.948028 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.948051 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.948068 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.040996 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.041032 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.041198 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:59 crc kubenswrapper[4763]: E0131 14:55:59.041196 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:59 crc kubenswrapper[4763]: E0131 14:55:59.041327 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:59 crc kubenswrapper[4763]: E0131 14:55:59.041429 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.041516 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:59 crc kubenswrapper[4763]: E0131 14:55:59.041605 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.043857 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:03:57.115355959 +0000 UTC Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.050572 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.050630 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.050646 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.050668 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.050685 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.153393 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.153445 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.153462 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.153485 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.153503 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.257665 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.257762 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.257782 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.257809 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.257825 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.360907 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.360978 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.360995 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.361019 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.361037 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.465758 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.465832 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.465850 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.465875 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.465892 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.553893 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/3.log" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.561314 4763 scope.go:117] "RemoveContainer" containerID="c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862" Jan 31 14:55:59 crc kubenswrapper[4763]: E0131 14:55:59.561785 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.568646 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.568745 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.568769 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.568802 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.568897 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.582430 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.597623 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.645984 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.671782 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.671826 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc 
kubenswrapper[4763]: I0131 14:55:59.671870 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.671892 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.671908 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.672521 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.686454 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.700416 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.720475 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"ps://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:55:57.965475 6780 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI0131 14:55:57.965601 6780 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.737091 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.772932 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.774876 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.774963 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.774983 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.775007 4763 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.775024 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.797451 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.818190 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.835296 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.855198 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.878157 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.878226 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.878248 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.878280 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.878305 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.878423 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.900354 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.924021 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.950209 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"2026-01-31T14:55:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6\\\\n2026-01-31T14:55:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6 to /host/opt/cni/bin/\\\\n2026-01-31T14:55:03Z [verbose] multus-daemon started\\\\n2026-01-31T14:55:03Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:55:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.967237 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.982112 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.982194 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.982217 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.982249 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.982272 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.044923 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 04:55:14.253359862 +0000 UTC
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.085247 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.085311 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.085329 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.085351 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.085369 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:00Z","lastTransitionTime":"2026-01-31T14:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.188135 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.188204 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.188230 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.188258 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.188279 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:00Z","lastTransitionTime":"2026-01-31T14:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.290763 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.290827 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.290848 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.290874 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.290892 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:00Z","lastTransitionTime":"2026-01-31T14:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.393325 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.393393 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.393408 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.393429 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.393444 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:00Z","lastTransitionTime":"2026-01-31T14:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.496924 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.496986 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.497007 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.497062 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.497081 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:00Z","lastTransitionTime":"2026-01-31T14:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.600014 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.600078 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.600096 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.600119 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.600137 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:00Z","lastTransitionTime":"2026-01-31T14:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.703924 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.704032 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.704093 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.704121 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.704139 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:00Z","lastTransitionTime":"2026-01-31T14:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.807359 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.807430 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.807448 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.807473 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.807490 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:00Z","lastTransitionTime":"2026-01-31T14:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.910101 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.910163 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.910174 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.910193 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.910204 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:00Z","lastTransitionTime":"2026-01-31T14:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.012577 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.012663 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.012687 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.012756 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.012781 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.041307 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5"
Jan 31 14:56:01 crc kubenswrapper[4763]: E0131 14:56:01.041442 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.041510 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:56:01 crc kubenswrapper[4763]: E0131 14:56:01.041563 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.041967 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:56:01 crc kubenswrapper[4763]: E0131 14:56:01.042169 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.042551 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:56:01 crc kubenswrapper[4763]: E0131 14:56:01.042741 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.045299 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 12:12:39.099018448 +0000 UTC
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.060681 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.081206 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.099357 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.115812 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.115866 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.115915 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.115937 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.115952 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.119146 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.140907 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.158762 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.180289 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.202069 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"2026-01-31T14:55:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6\\\\n2026-01-31T14:55:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6 to /host/opt/cni/bin/\\\\n2026-01-31T14:55:03Z [verbose] multus-daemon started\\\\n2026-01-31T14:55:03Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:55:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.218915 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.218982 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.218999 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.219028 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.219045 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.225414 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.243332 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.259531 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.279547 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.298902 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.315015 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.322311 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.322374 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.322387 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.322403 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.322415 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.332915 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.366136 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"ps://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:55:57.965475 6780 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI0131 14:55:57.965601 6780 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.385957 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.419471 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.425533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.425607 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.425630 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.425668 4763 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.425756 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.527934 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.528265 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.528278 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.528295 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.528307 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.630403 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.630441 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.630452 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.630469 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.630480 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.733181 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.733417 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.733504 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.733602 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.733669 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.836679 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.836752 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.836762 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.836784 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.836802 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.940984 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.941414 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.941565 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.941854 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.942047 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.045635 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.045722 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.045508 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:59:39.5736501 +0000 UTC Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.045741 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.045817 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.045884 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:02Z","lastTransitionTime":"2026-01-31T14:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.069261 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.149195 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.149617 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.149854 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.150019 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.150159 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:02Z","lastTransitionTime":"2026-01-31T14:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.253059 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.253348 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.253432 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.253518 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.253599 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:02Z","lastTransitionTime":"2026-01-31T14:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-record node-status cycle (NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady / "Node became not ready") repeats at ~100 ms intervals, 14:56:02.356 through 14:56:02.975 ...]
Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.040996 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.041110 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.041166 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.041262 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.041287 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.041327 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.041470 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.041525 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.046000 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 19:47:34.476241421 +0000 UTC Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.077678 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.077785 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.077808 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.077836 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.077858 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:03Z","lastTransitionTime":"2026-01-31T14:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... the node-status cycle repeats at ~100 ms intervals, 14:56:03.180 through 14:56:03.903 ...]
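Every one of these NodeNotReady cycles has the same root cause: the kubelet's runtime network check finds no CNI configuration in /etc/kubernetes/cni/net.d/ until the cluster network operator writes one. For orientation, a sketch of the kind of conflist whose presence clears that check; the plugin type, network name, and subnet are illustrative placeholders, not what CRC's network operator actually installs:

package main

import (
	"fmt"
	"os"
)

// A minimal CNI conflist of the shape the kubelet looks for in
// /etc/kubernetes/cni/net.d/. All values are illustrative; on CRC this
// file is written by the cluster network operator, not by hand.
const conflist = `{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    }
  ]
}`

func main() {
	// Any parseable conflist here ends the NetworkPluginNotReady loop.
	path := "/etc/kubernetes/cni/net.d/99-example.conflist"
	if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("wrote", path)
}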
Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.970116 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.970317 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.970287971 +0000 UTC m=+147.725026274 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.970783 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.971004 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.971182 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.971024 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.971517 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.971663 4763 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.971898 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.971873545 +0000 UTC m=+147.726611868 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.971081 4763 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.972212 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.972195365 +0000 UTC m=+147.726933688 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.971333 4763 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.972506 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.972491403 +0000 UTC m=+147.727229726 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.006333 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.006391 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.006407 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.006429 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.006443 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:04Z","lastTransitionTime":"2026-01-31T14:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
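The durationBeforeRetry 1m4s on the nestedpendingoperations.go:348 records fits a doubling backoff: 64 s = 500 ms x 2^7, i.e. these volume operations have already failed repeatedly since the kubelet started (m=+147 s). A sketch of that schedule, assuming a 500 ms base, factor 2, and a cap near two minutes in the style of the kubelet's exponentialbackoff helper (the exact constants are an assumption, not read from this log):

package main

import (
	"fmt"
	"time"
)

// backoffSchedule returns the first n retry delays of a doubling backoff
// clamped at limit. Base, factor, and cap are assumptions modeled on the
// kubelet's exponentialbackoff helper, not a verbatim copy of it.
func backoffSchedule(base, limit time.Duration, n int) []time.Duration {
	out := make([]time.Duration, 0, n)
	d := base
	for i := 0; i < n; i++ {
		out = append(out, d)
		d *= 2
		if d > limit {
			d = limit
		}
	}
	return out
}

func main() {
	// The eighth delay is 64s -- the 1m4s seen in the records above.
	for i, d := range backoffSchedule(500*time.Millisecond, 2*time.Minute, 9) {
		fmt.Printf("retry %d: wait %v\n", i+1, d)
	}
}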
Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.046861 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 21:17:24.409298449 +0000 UTC
Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.072764 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:56:04 crc kubenswrapper[4763]: E0131 14:56:04.072979 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 31 14:56:04 crc kubenswrapper[4763]: E0131 14:56:04.073023 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 31 14:56:04 crc kubenswrapper[4763]: E0131 14:56:04.073043 4763 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 14:56:04 crc kubenswrapper[4763]: E0131 14:56:04.073124 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.073100802 +0000 UTC m=+147.827839135 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
[... the node-status cycle repeats at ~100 ms intervals, 14:56:04.109 through 14:56:04.940 ...]
Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.041689 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:56:05 crc kubenswrapper[4763]: E0131 14:56:05.041930 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.041952 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.041753 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5"
Jan 31 14:56:05 crc kubenswrapper[4763]: E0131 14:56:05.042374 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.042460 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:56:05 crc kubenswrapper[4763]: E0131 14:56:05.042532 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2"
Jan 31 14:56:05 crc kubenswrapper[4763]: E0131 14:56:05.042812 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.044026 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.044049 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.044058 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.044071 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.044082 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:05Z","lastTransitionTime":"2026-01-31T14:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.047393 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 07:51:12.704965482 +0000 UTC
[... the node-status cycle repeats at ~100 ms intervals, 14:56:05.147 through 14:56:05.973 ...]
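Note that across the certificate_manager.go:356 records the expiration stays fixed at 2026-02-24 05:53:03 while the rotation deadline differs on every message (2025-12-25, 2025-12-19, 2025-11-30, ...). That pattern matches a manager that re-draws the deadline as a random point inside the certificate's lifetime on each evaluation; client-go's certificate manager is commonly described as jittering into the 70-90% band of validity. A sketch under that assumption (the jitter constants and the one-year lifetime are assumptions, not read from this log):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in the 70-90% band of the
// certificate's validity, mirroring the jitter client-go's certificate
// manager is described as applying (constants assumed, see above).
func rotationDeadline(notBefore, notAfter time.Time, r *rand.Rand) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*r.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, err := time.Parse("2006-01-02 15:04:05", "2026-02-24 05:53:03")
	if err != nil {
		panic(err)
	}
	notBefore := notAfter.Add(-365 * 24 * time.Hour) // assumed 1y lifetime
	r := rand.New(rand.NewSource(time.Now().UnixNano()))
	// Each evaluation re-draws the deadline, which is why the log shows a
	// different rotation date on every certificate_manager.go message.
	for i := 0; i < 3; i++ {
		fmt.Println(rotationDeadline(notBefore, notAfter, r))
	}
}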
Jan 31 14:56:06 crc kubenswrapper[4763]: I0131 14:56:06.048207 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 22:37:33.283191735 +0000 UTC
[... the node-status cycle repeats at ~100 ms intervals, 14:56:06.076 through 14:56:07.004 ...]
Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.041782 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.041882 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.041880 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5"
Jan 31 14:56:07 crc kubenswrapper[4763]: E0131 14:56:07.041954 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.041996 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:56:07 crc kubenswrapper[4763]: E0131 14:56:07.042238 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 14:56:07 crc kubenswrapper[4763]: E0131 14:56:07.042335 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2"
Jan 31 14:56:07 crc kubenswrapper[4763]: E0131 14:56:07.042458 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.048767 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 00:46:52.069127847 +0000 UTC Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.107169 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.107223 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.107240 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.107266 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.107285 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.210377 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.210437 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.210452 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.210473 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.210492 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.313822 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.313878 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.313894 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.313915 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.313930 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.417062 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.417128 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.417150 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.417180 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.417202 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.520003 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.520086 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.520112 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.520144 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.520166 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.622725 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.622776 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.622798 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.622828 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.622850 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.726506 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.726568 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.726587 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.726611 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.726629 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.829072 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.829123 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.829140 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.829161 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.829178 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.918114 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.918174 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.918190 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.918212 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.918228 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: E0131 14:56:07.936684 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:07Z is after 
2025-08-24T17:21:41Z" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.941471 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.941520 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.941542 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.941568 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.941584 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: E0131 14:56:07.961199 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:07Z is after 
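The root cause of the failed status patch is visible at the end of the entry: the node-identity webhook's serving certificate expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-31. A minimal sketch of the validity-window check that TLS verification performs, reimplemented here purely for illustration, with the dates taken from the error message above:

// Illustrative only: a certificate is rejected when the current time
// falls outside its [NotBefore, NotAfter] window, which is exactly the
// "certificate has expired or is not yet valid" failure in the log.
package main

import (
	"fmt"
	"time"
)

func checkValidity(notBefore, notAfter, now time.Time) error {
	if now.Before(notBefore) {
		return fmt.Errorf("certificate is not yet valid")
	}
	if now.After(notAfter) {
		return fmt.Errorf("certificate has expired: current time %s is after %s",
			now.Format(time.RFC3339), notAfter.Format(time.RFC3339))
	}
	return nil
}

func main() {
	// NotAfter and "current time" copied from the webhook error above.
	notAfter, _ := time.Parse(time.RFC3339, "2025-08-24T17:21:41Z")
	now, _ := time.Parse(time.RFC3339, "2026-01-31T14:56:07Z")
	fmt.Println(checkValidity(time.Time{}, notAfter, now))
}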
2025-08-24T17:21:41Z" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.966785 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.966825 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.966841 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.966863 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.966880 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: E0131 14:56:07.987458 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:07Z is after 
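The patch body inside these entries is JSON whose quotes appear as \\\" because, by the look of the escaping, the err string was quoted a second time by the logger. A minimal sketch, under that assumption, of recovering and pretty-printing the payload for inspection; the short escaped string below is a hypothetical stand-in for the full payload:

// Illustrative only: replace the \\\" escape sequence with a plain quote
// to recover the embedded JSON, then pretty-print it. Assumes the payload
// contains no other escape sequences, which holds for the entries above.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	escaped := `{\\\"status\\\":{\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\"}}}`
	plain := strings.ReplaceAll(escaped, `\\\"`, `"`)
	var out bytes.Buffer
	if err := json.Indent(&out, []byte(plain), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(out.String())
}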
2025-08-24T17:21:41Z" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.994459 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.994565 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.994586 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.994649 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.994668 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:08 crc kubenswrapper[4763]: E0131 14:56:08.016538 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:08Z is after 
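Every one of these patch failures shares the single root cause visible in the trailing error: the serving certificate of the node.network-node-identity.openshift.io webhook expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-31, so Go's TLS client rejects the handshake and the status update never reaches the API server. A minimal standalone sketch of the same NotBefore/NotAfter test (not kubelet source; the certificate path argument is a placeholder):

    // check_cert.go: standalone sketch, not kubelet code. Reads a
    // PEM-encoded certificate (path given as the first argument, a
    // placeholder) and applies the same validity-window test that the
    // TLS handshake above fails.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        pemBytes, err := os.ReadFile(os.Args[1])
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        now := time.Now().UTC()
        switch {
        case now.Before(cert.NotBefore):
            fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
                now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
        case now.After(cert.NotAfter):
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        default:
            fmt.Printf("certificate valid until %s\n", cert.NotAfter.Format(time.RFC3339))
        }
    }

Run against the webhook's serving certificate, this would print the same "current time ... is after ..." complaint that the kubelet keeps logging here.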
2025-08-24T17:21:41Z" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.022419 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.022545 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.022612 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.022643 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.022729 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:08 crc kubenswrapper[4763]: E0131 14:56:08.044647 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:08Z is after 
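For readers unfamiliar with the payload being retried above: it is a strategic merge patch against the node's status subresource, in which conditions entries are merged by their type key and the $setElementOrder/conditions directive pins the order of the merged list. A trimmed, illustrative reconstruction of that shape (not the kubelet's own code; only the Ready condition is shown):

    // node_patch.go: illustrative reconstruction of the patch shape seen
    // in the log, not kubelet source; only the Ready condition is included.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        patch := map[string]any{
            "status": map[string]any{
                // Orders the merged conditions list by merge key ("type").
                "$setElementOrder/conditions": []map[string]string{
                    {"type": "MemoryPressure"},
                    {"type": "DiskPressure"},
                    {"type": "PIDPressure"},
                    {"type": "Ready"},
                },
                // Merged into the existing list entry whose type is "Ready".
                "conditions": []map[string]string{{
                    "type":               "Ready",
                    "status":             "False",
                    "reason":             "KubeletNotReady",
                    "lastHeartbeatTime":  "2026-01-31T14:56:08Z",
                    "lastTransitionTime": "2026-01-31T14:56:08Z",
                }},
            },
        }
        body, _ := json.Marshal(patch)
        fmt.Println(string(body)) // roughly the body sent with PATCH .../nodes/crc/status
    }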
2025-08-24T17:21:41Z" Jan 31 14:56:08 crc kubenswrapper[4763]: E0131 14:56:08.044917 4763 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.047636 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.047744 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.047762 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.047789 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.047837 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.049794 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:13:43.061630106 +0000 UTC Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.150785 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.150881 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.150935 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.150962 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.151012 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.254559 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.254624 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.254647 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.254676 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.254732 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.358114 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.358492 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.358510 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.358534 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.358551 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.462357 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.462428 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.462529 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.462564 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.462588 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.565580 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.565674 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.565741 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.565769 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.565788 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.668881 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.668943 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.668957 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.668981 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.668996 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.772254 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.772289 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.772299 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.772315 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.772326 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
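The certificate_manager.go entries are unrelated to the webhook failure: they track the kubelet-serving certificate, which remains valid until 2026-02-24. The rotation deadline they report differs on every pass (2025-11-15, then 2025-11-16, then 2025-12-31), consistent with client-go's certificate manager re-drawing the deadline as a jittered fraction of the certificate's lifetime each time it checks. A sketch under that assumption; the 70-90% window and the one-year lifetime below are assumptions for illustration, not values read from this log:

    // rotation_deadline.go: illustrative only. Assumes the manager picks a
    // point 70-90% of the way through the cert's validity window, which
    // would explain why every logged deadline lands months before the
    // 2026-02-24 expiry and moves between attempts.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        lifetime := notAfter.Sub(notBefore)
        jitter := 0.7 + 0.2*rand.Float64() // uniform draw in [0.7, 0.9)
        return notBefore.Add(time.Duration(float64(lifetime) * jitter))
    }

    func main() {
        notAfter, _ := time.Parse("2006-01-02 15:04:05", "2026-02-24 05:53:03")
        notBefore := notAfter.AddDate(-1, 0, 0) // assumed one-year lifetime
        for i := 0; i < 3; i++ {
            // Each draw lands somewhere different, matching the way the
            // logged deadline moves between 14:56:08, :09, and :10.
            fmt.Println(rotationDeadline(notBefore, notAfter))
        }
    }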
Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.041356 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5"
Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.041553 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:56:09 crc kubenswrapper[4763]: E0131 14:56:09.041563 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2"
Jan 31 14:56:09 crc kubenswrapper[4763]: E0131 14:56:09.042257 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.042351 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.042392 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:56:09 crc kubenswrapper[4763]: E0131 14:56:09.042805 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 14:56:09 crc kubenswrapper[4763]: E0131 14:56:09.042926 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.050165 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 11:26:38.312271457 +0000 UTC
[... the five-entry "Recording event message" / "Node became not ready" sequence keeps repeating at 14:56:09.081, .186, .289, .393, .496, .599, .702, .805, .908 and 14:56:10.010; only the timestamps change ...]
Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.051056 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:40:32.428995269 +0000 UTC
[... further repeats at 14:56:10.113, .216, .319, .421, .525, .629, and .733 ...]
Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.836250 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.836332 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.836350 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.836383 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.836401 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:10Z","lastTransitionTime":"2026-01-31T14:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.939010 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.939076 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.939097 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.939127 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.939150 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:10Z","lastTransitionTime":"2026-01-31T14:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.040922 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.040987 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.040913 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:11 crc kubenswrapper[4763]: E0131 14:56:11.041071 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.041106 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:11 crc kubenswrapper[4763]: E0131 14:56:11.041255 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:11 crc kubenswrapper[4763]: E0131 14:56:11.041338 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:11 crc kubenswrapper[4763]: E0131 14:56:11.041433 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.042887 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.042953 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.042979 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.043009 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.043034 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.051188 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:07:45.575408207 +0000 UTC Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.065569 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.085498 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"2026-01-31T14:55:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6\\\\n2026-01-31T14:55:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6 to /host/opt/cni/bin/\\\\n2026-01-31T14:55:03Z [verbose] multus-daemon started\\\\n2026-01-31T14:55:03Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:55:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.104904 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.118775 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a87dec4-20df-4b46-878a-2fd4e60feedd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4c38511fece2af6df3bb93ecff7c793bbf4320c7b78e9996fa88a8775d2752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b969d8a316d58e6e57d70e05ba1213b54e8ce8ddb87cbdc9f387758d2d63ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b969d8a316d58e6e57d70e05ba1213b54e8ce8ddb87cbdc9f387758d2d63ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.135985 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.146742 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.146817 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.146839 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.146867 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.146890 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.164100 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f9999
1cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.185084 4763 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.209792 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"ps://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:55:57.965475 6780 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI0131 14:55:57.965601 6780 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.231570 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.249913 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.249998 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.250019 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.250045 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.250063 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.258178 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.276082 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.293769 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.310853 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.326551 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.344952 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34
557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.353055 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.353131 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.353150 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.353176 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.353194 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.364224 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.383521 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.403092 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.418686 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.455899 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.456176 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.456324 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.456468 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.456601 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.559687 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.559803 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.559824 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.559851 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.559871 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.662087 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.662130 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.662141 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.662158 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.662170 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.764673 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.764745 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.764758 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.764775 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.764788 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.867141 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.867467 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.867586 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.867684 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.867800 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.970928 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.970996 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.971013 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.971037 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.971055 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.051481 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 14:13:55.881637622 +0000 UTC Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.073751 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.074503 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.074612 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.074754 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.074869 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:12Z","lastTransitionTime":"2026-01-31T14:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.178099 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.178148 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.178166 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.178188 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.178203 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:12Z","lastTransitionTime":"2026-01-31T14:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.281763 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.281872 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.281895 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.281919 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.281937 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:12Z","lastTransitionTime":"2026-01-31T14:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.384680 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.384970 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.385085 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.385168 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.385254 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:12Z","lastTransitionTime":"2026-01-31T14:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.488315 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.488659 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.488772 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.488887 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.488972 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:12Z","lastTransitionTime":"2026-01-31T14:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.591602 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.591882 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.591941 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.591976 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.592001 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:12Z","lastTransitionTime":"2026-01-31T14:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.694993 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.695086 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.695103 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.695126 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.695144 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:12Z","lastTransitionTime":"2026-01-31T14:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.798088 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.798116 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.798124 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.798137 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.798145 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:12Z","lastTransitionTime":"2026-01-31T14:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.900680 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.900767 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.900783 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.900802 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.900818 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:12Z","lastTransitionTime":"2026-01-31T14:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.003533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.003577 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.003591 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.003608 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.003621 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.041006 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.041095 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:13 crc kubenswrapper[4763]: E0131 14:56:13.041143 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.041102 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.041285 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:13 crc kubenswrapper[4763]: E0131 14:56:13.041348 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:13 crc kubenswrapper[4763]: E0131 14:56:13.041528 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:13 crc kubenswrapper[4763]: E0131 14:56:13.041747 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.043399 4763 scope.go:117] "RemoveContainer" containerID="c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862" Jan 31 14:56:13 crc kubenswrapper[4763]: E0131 14:56:13.043783 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.052202 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:12:51.396572971 +0000 UTC Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.105854 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.105910 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.105928 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.105950 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.105975 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.208963 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.209016 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.209031 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.209049 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.209061 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.311492 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.311563 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.311585 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.311612 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.311628 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.414998 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.415049 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.415063 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.415289 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.415303 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.518686 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.518800 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.518817 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.518843 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.518860 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.623051 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.623121 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.623139 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.623163 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.623181 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.726554 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.726631 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.726651 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.726677 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.726725 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.829470 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.829529 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.829542 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.829563 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.829577 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.932749 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.932788 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.932797 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.932815 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.932829 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.036737 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.036804 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.036822 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.036849 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.036939 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.052521 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:09:24.203591128 +0000 UTC Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.141271 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.141353 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.141376 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.141408 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.141514 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.244363 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.244405 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.244418 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.244436 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.244449 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.346595 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.346645 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.346661 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.346681 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.346715 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.449802 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.449865 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.449884 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.449908 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.449928 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.552242 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.552317 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.552329 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.552353 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.552369 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.654862 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.654907 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.654926 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.654959 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.654974 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.757849 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.757882 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.757892 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.757907 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.757918 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.859671 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.859716 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.859724 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.859737 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.859745 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.962864 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.962926 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.962947 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.962978 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.963000 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.041499 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:15 crc kubenswrapper[4763]: E0131 14:56:15.041629 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.041815 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:15 crc kubenswrapper[4763]: E0131 14:56:15.041868 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.042021 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.042082 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:15 crc kubenswrapper[4763]: E0131 14:56:15.042538 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:15 crc kubenswrapper[4763]: E0131 14:56:15.042654 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.054736 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:40:36.988407749 +0000 UTC Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.065423 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.065496 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.065520 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.065549 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.065571 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.169182 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.169253 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.169319 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.169352 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.169374 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.272182 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.272247 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.272264 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.272289 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.272311 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.374688 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.374759 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.374776 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.374797 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.374813 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.477331 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.477395 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.477412 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.477437 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.477455 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.580652 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.580735 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.580754 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.580778 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.580795 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.683245 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.683496 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.683596 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.683670 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.683783 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.785752 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.785805 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.785823 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.785846 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.785865 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.888415 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.888473 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.888494 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.888518 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.888536 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.991047 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.991104 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.991121 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.991145 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.991165 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.055559 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:35:00.188358831 +0000 UTC Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.094438 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.094818 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.094971 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.095151 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.095286 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:16Z","lastTransitionTime":"2026-01-31T14:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.198662 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.198878 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.198915 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.198945 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.199057 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:16Z","lastTransitionTime":"2026-01-31T14:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.301507 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.301563 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.301591 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.301620 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.301640 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:16Z","lastTransitionTime":"2026-01-31T14:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.404754 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.404824 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.404847 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.404876 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.404899 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:16Z","lastTransitionTime":"2026-01-31T14:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.508965 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.509024 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.509038 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.509061 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.509077 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:16Z","lastTransitionTime":"2026-01-31T14:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.611589 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.611668 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.611689 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.611756 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.611781 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:16Z","lastTransitionTime":"2026-01-31T14:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.714264 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.714339 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.714360 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.714392 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.714475 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:16Z","lastTransitionTime":"2026-01-31T14:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.816756 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.816801 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.816812 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.816829 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.816842 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:16Z","lastTransitionTime":"2026-01-31T14:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.919138 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.919210 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.919227 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.919251 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.919268 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:16Z","lastTransitionTime":"2026-01-31T14:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.022463 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.022534 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.022549 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.022571 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.022587 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.040912 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:56:17 crc kubenswrapper[4763]: E0131 14:56:17.041144 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.041542 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:56:17 crc kubenswrapper[4763]: E0131 14:56:17.041685 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.042011 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:56:17 crc kubenswrapper[4763]: E0131 14:56:17.042151 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.042509 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5"
Jan 31 14:56:17 crc kubenswrapper[4763]: E0131 14:56:17.042631 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.056279 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 02:45:39.154386198 +0000 UTC
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.125429 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.125476 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.125493 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.125516 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.125533 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.228012 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.228068 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.228087 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.228109 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.228124 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.330754 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.330811 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.330833 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.331247 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.331304 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.434963 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.435031 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.435048 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.435074 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.435091 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.538485 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.538580 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.538597 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.538654 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.538679 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.640979 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.641046 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.641069 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.641097 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.641118 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.743217 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.743287 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.743327 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.743359 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.743382 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.846811 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.846865 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.846879 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.846902 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.846916 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.950246 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.950325 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.950351 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.950382 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.950404 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.054003 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.054057 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.054067 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.054085 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.054097 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:18Z","lastTransitionTime":"2026-01-31T14:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.058186 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:15:10.447121742 +0000 UTC
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.158009 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.158138 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.158166 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.158197 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.158222 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:18Z","lastTransitionTime":"2026-01-31T14:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.168877 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.168922 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.168941 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.168960 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.168976 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:18Z","lastTransitionTime":"2026-01-31T14:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.241183 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69"]
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.242083 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.243828 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.244397 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.244869 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.246472 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.294030 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=77.293997689 podStartE2EDuration="1m17.293997689s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.276921849 +0000 UTC m=+98.031660182" watchObservedRunningTime="2026-01-31 14:56:18.293997689 +0000 UTC m=+98.048736022"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.326933 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/04ba9988-7dc2-41c2-bebf-9f6308ecd013-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.326992 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04ba9988-7dc2-41c2-bebf-9f6308ecd013-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.327065 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04ba9988-7dc2-41c2-bebf-9f6308ecd013-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.327118 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04ba9988-7dc2-41c2-bebf-9f6308ecd013-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69"
Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.327160 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/04ba9988-7dc2-41c2-bebf-9f6308ecd013-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69"
\"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.351953 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podStartSLOduration=78.351925138 podStartE2EDuration="1m18.351925138s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.35022558 +0000 UTC m=+98.104963913" watchObservedRunningTime="2026-01-31 14:56:18.351925138 +0000 UTC m=+98.106663471" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.352745 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-ghn8r" podStartSLOduration=78.35273186 podStartE2EDuration="1m18.35273186s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.331438192 +0000 UTC m=+98.086176515" watchObservedRunningTime="2026-01-31 14:56:18.35273186 +0000 UTC m=+98.107470193" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.422004 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=49.421985287 podStartE2EDuration="49.421985287s" podCreationTimestamp="2026-01-31 14:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.421973077 +0000 UTC m=+98.176711420" watchObservedRunningTime="2026-01-31 14:56:18.421985287 +0000 UTC m=+98.176723590" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.428511 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/04ba9988-7dc2-41c2-bebf-9f6308ecd013-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.428552 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04ba9988-7dc2-41c2-bebf-9f6308ecd013-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.428629 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04ba9988-7dc2-41c2-bebf-9f6308ecd013-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.428628 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/04ba9988-7dc2-41c2-bebf-9f6308ecd013-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.428664 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04ba9988-7dc2-41c2-bebf-9f6308ecd013-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.428714 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/04ba9988-7dc2-41c2-bebf-9f6308ecd013-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.428827 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/04ba9988-7dc2-41c2-bebf-9f6308ecd013-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.430487 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04ba9988-7dc2-41c2-bebf-9f6308ecd013-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.441045 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04ba9988-7dc2-41c2-bebf-9f6308ecd013-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.457634 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04ba9988-7dc2-41c2-bebf-9f6308ecd013-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.521231 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=75.521213037 podStartE2EDuration="1m15.521213037s" podCreationTimestamp="2026-01-31 14:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.520922998 +0000 UTC m=+98.275661331" watchObservedRunningTime="2026-01-31 14:56:18.521213037 +0000 UTC m=+98.275951340" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.539741 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-qzkhg" podStartSLOduration=78.539719027 podStartE2EDuration="1m18.539719027s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.538882913 +0000 UTC m=+98.293621236" watchObservedRunningTime="2026-01-31 14:56:18.539719027 +0000 UTC m=+98.294457330" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.565276 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.578622 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=80.578605211 podStartE2EDuration="1m20.578605211s" podCreationTimestamp="2026-01-31 14:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.569236827 +0000 UTC m=+98.323975120" watchObservedRunningTime="2026-01-31 14:56:18.578605211 +0000 UTC m=+98.333343514" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.597224 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-qcb97" podStartSLOduration=79.597203413 podStartE2EDuration="1m19.597203413s" podCreationTimestamp="2026-01-31 14:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.579387182 +0000 UTC m=+98.334125485" watchObservedRunningTime="2026-01-31 14:56:18.597203413 +0000 UTC m=+98.351941716" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.614022 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-npvkf" podStartSLOduration=78.614003606 podStartE2EDuration="1m18.614003606s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.598812248 +0000 UTC m=+98.353550551" watchObservedRunningTime="2026-01-31 14:56:18.614003606 +0000 UTC m=+98.368741899" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.629950 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" event={"ID":"04ba9988-7dc2-41c2-bebf-9f6308ecd013","Type":"ContainerStarted","Data":"e3defdf134a02e5ab7a7ac2cac05ae140aea438d3445df0ebf977475e0c63d7a"} Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.633320 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" podStartSLOduration=77.633307039 podStartE2EDuration="1m17.633307039s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.614479409 +0000 UTC m=+98.369217702" watchObservedRunningTime="2026-01-31 14:56:18.633307039 +0000 UTC m=+98.388045332" Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.035063 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:19 crc kubenswrapper[4763]: E0131 14:56:19.035292 4763 
Jan 31 14:56:19 crc kubenswrapper[4763]: E0131 14:56:19.035392 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs podName:84302428-88e1-47ba-84cc-7d12472f9aa2 nodeName:}" failed. No retries permitted until 2026-01-31 14:57:23.035367922 +0000 UTC m=+162.790106325 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs") pod "network-metrics-daemon-26pm5" (UID: "84302428-88e1-47ba-84cc-7d12472f9aa2") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.040976 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.041063 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5"
Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.041079 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.041271 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:56:19 crc kubenswrapper[4763]: E0131 14:56:19.041266 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 14:56:19 crc kubenswrapper[4763]: E0131 14:56:19.041447 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2"
Jan 31 14:56:19 crc kubenswrapper[4763]: E0131 14:56:19.041578 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 14:56:19 crc kubenswrapper[4763]: E0131 14:56:19.041765 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.059435 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 14:18:19.1755932 +0000 UTC
Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.059497 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.069037 4763 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.636744 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" event={"ID":"04ba9988-7dc2-41c2-bebf-9f6308ecd013","Type":"ContainerStarted","Data":"15ac6022b7ad6aeb12b73c83dfd692e9afc1518b8f551b05985fb7ad56c6c7b7"}
Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.658310 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.658282294 podStartE2EDuration="17.658282294s" podCreationTimestamp="2026-01-31 14:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.633458323 +0000 UTC m=+98.388196616" watchObservedRunningTime="2026-01-31 14:56:19.658282294 +0000 UTC m=+99.413020627"
Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.659521 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" podStartSLOduration=79.659513149 podStartE2EDuration="1m19.659513149s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:19.658064758 +0000 UTC m=+99.412803091" watchObservedRunningTime="2026-01-31 14:56:19.659513149 +0000 UTC m=+99.414251482"
Jan 31 14:56:21 crc kubenswrapper[4763]: I0131 14:56:21.041381 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:56:21 crc kubenswrapper[4763]: I0131 14:56:21.041434 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5"
Jan 31 14:56:21 crc kubenswrapper[4763]: I0131 14:56:21.041502 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:56:21 crc kubenswrapper[4763]: E0131 14:56:21.043456 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 14:56:21 crc kubenswrapper[4763]: I0131 14:56:21.043518 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:56:21 crc kubenswrapper[4763]: E0131 14:56:21.043690 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 14:56:21 crc kubenswrapper[4763]: E0131 14:56:21.043935 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 14:56:21 crc kubenswrapper[4763]: E0131 14:56:21.044082 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2"
Jan 31 14:56:23 crc kubenswrapper[4763]: I0131 14:56:23.041749 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:56:23 crc kubenswrapper[4763]: I0131 14:56:23.041765 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:56:23 crc kubenswrapper[4763]: E0131 14:56:23.041944 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 14:56:23 crc kubenswrapper[4763]: I0131 14:56:23.041979 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5"
Jan 31 14:56:23 crc kubenswrapper[4763]: E0131 14:56:23.042102 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 14:56:23 crc kubenswrapper[4763]: E0131 14:56:23.042183 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2"
pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:23 crc kubenswrapper[4763]: I0131 14:56:23.042628 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:23 crc kubenswrapper[4763]: E0131 14:56:23.042756 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:24 crc kubenswrapper[4763]: I0131 14:56:24.042160 4763 scope.go:117] "RemoveContainer" containerID="c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862" Jan 31 14:56:24 crc kubenswrapper[4763]: E0131 14:56:24.042418 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" Jan 31 14:56:25 crc kubenswrapper[4763]: I0131 14:56:25.041447 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:25 crc kubenswrapper[4763]: I0131 14:56:25.041524 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:25 crc kubenswrapper[4763]: I0131 14:56:25.041558 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:25 crc kubenswrapper[4763]: E0131 14:56:25.041622 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:25 crc kubenswrapper[4763]: I0131 14:56:25.041732 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:25 crc kubenswrapper[4763]: E0131 14:56:25.041738 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:25 crc kubenswrapper[4763]: E0131 14:56:25.041834 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:25 crc kubenswrapper[4763]: E0131 14:56:25.041908 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:27 crc kubenswrapper[4763]: I0131 14:56:27.040798 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:27 crc kubenswrapper[4763]: I0131 14:56:27.040886 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:27 crc kubenswrapper[4763]: E0131 14:56:27.040915 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:27 crc kubenswrapper[4763]: E0131 14:56:27.041058 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:27 crc kubenswrapper[4763]: I0131 14:56:27.041126 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:27 crc kubenswrapper[4763]: E0131 14:56:27.041298 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:27 crc kubenswrapper[4763]: I0131 14:56:27.042220 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:27 crc kubenswrapper[4763]: E0131 14:56:27.042520 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:29 crc kubenswrapper[4763]: I0131 14:56:29.041073 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:29 crc kubenswrapper[4763]: I0131 14:56:29.041146 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:29 crc kubenswrapper[4763]: E0131 14:56:29.041346 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:29 crc kubenswrapper[4763]: I0131 14:56:29.041531 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:29 crc kubenswrapper[4763]: E0131 14:56:29.041783 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:29 crc kubenswrapper[4763]: E0131 14:56:29.041975 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:29 crc kubenswrapper[4763]: I0131 14:56:29.042006 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:29 crc kubenswrapper[4763]: E0131 14:56:29.042192 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:31 crc kubenswrapper[4763]: I0131 14:56:31.041581 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:31 crc kubenswrapper[4763]: I0131 14:56:31.041719 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:31 crc kubenswrapper[4763]: I0131 14:56:31.041725 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:31 crc kubenswrapper[4763]: I0131 14:56:31.041909 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:31 crc kubenswrapper[4763]: E0131 14:56:31.043125 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:31 crc kubenswrapper[4763]: E0131 14:56:31.043341 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:31 crc kubenswrapper[4763]: E0131 14:56:31.043480 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:31 crc kubenswrapper[4763]: E0131 14:56:31.043586 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:33 crc kubenswrapper[4763]: I0131 14:56:33.041456 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:33 crc kubenswrapper[4763]: I0131 14:56:33.041526 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:33 crc kubenswrapper[4763]: I0131 14:56:33.041599 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:33 crc kubenswrapper[4763]: E0131 14:56:33.041600 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:33 crc kubenswrapper[4763]: I0131 14:56:33.041671 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:33 crc kubenswrapper[4763]: E0131 14:56:33.041788 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:33 crc kubenswrapper[4763]: E0131 14:56:33.041972 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:33 crc kubenswrapper[4763]: E0131 14:56:33.042113 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:34 crc kubenswrapper[4763]: I0131 14:56:34.695721 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/1.log" Jan 31 14:56:34 crc kubenswrapper[4763]: I0131 14:56:34.696371 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/0.log" Jan 31 14:56:34 crc kubenswrapper[4763]: I0131 14:56:34.696413 4763 generic.go:334] "Generic (PLEG): container finished" podID="2335d04f-10b2-4cf8-aae6-236650539c74" containerID="ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44" exitCode=1 Jan 31 14:56:34 crc kubenswrapper[4763]: I0131 14:56:34.696447 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzkhg" event={"ID":"2335d04f-10b2-4cf8-aae6-236650539c74","Type":"ContainerDied","Data":"ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44"} Jan 31 14:56:34 crc kubenswrapper[4763]: I0131 14:56:34.696488 4763 scope.go:117] "RemoveContainer" containerID="e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947" Jan 31 14:56:34 crc kubenswrapper[4763]: I0131 14:56:34.696946 4763 scope.go:117] "RemoveContainer" containerID="ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44" Jan 31 14:56:34 crc kubenswrapper[4763]: E0131 14:56:34.697122 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-qzkhg_openshift-multus(2335d04f-10b2-4cf8-aae6-236650539c74)\"" pod="openshift-multus/multus-qzkhg" podUID="2335d04f-10b2-4cf8-aae6-236650539c74" Jan 31 14:56:35 crc kubenswrapper[4763]: I0131 14:56:35.041814 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:35 crc kubenswrapper[4763]: I0131 14:56:35.041900 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:35 crc kubenswrapper[4763]: E0131 14:56:35.041972 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:35 crc kubenswrapper[4763]: I0131 14:56:35.042066 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:35 crc kubenswrapper[4763]: E0131 14:56:35.042261 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:35 crc kubenswrapper[4763]: I0131 14:56:35.042307 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:35 crc kubenswrapper[4763]: E0131 14:56:35.042375 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:35 crc kubenswrapper[4763]: E0131 14:56:35.042446 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:35 crc kubenswrapper[4763]: I0131 14:56:35.701554 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/1.log" Jan 31 14:56:37 crc kubenswrapper[4763]: I0131 14:56:37.041364 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:37 crc kubenswrapper[4763]: I0131 14:56:37.041549 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:37 crc kubenswrapper[4763]: E0131 14:56:37.041604 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:37 crc kubenswrapper[4763]: I0131 14:56:37.041656 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:37 crc kubenswrapper[4763]: I0131 14:56:37.041798 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:37 crc kubenswrapper[4763]: E0131 14:56:37.041841 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:37 crc kubenswrapper[4763]: E0131 14:56:37.042071 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:37 crc kubenswrapper[4763]: E0131 14:56:37.042246 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:38 crc kubenswrapper[4763]: I0131 14:56:38.043075 4763 scope.go:117] "RemoveContainer" containerID="c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862" Jan 31 14:56:38 crc kubenswrapper[4763]: I0131 14:56:38.713907 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/3.log" Jan 31 14:56:38 crc kubenswrapper[4763]: I0131 14:56:38.716823 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"0dbc532ebe28b0235c423161c9ad89a344c1f544a333aeb218dae16072e95df9"} Jan 31 14:56:38 crc kubenswrapper[4763]: I0131 14:56:38.717252 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:56:38 crc kubenswrapper[4763]: I0131 14:56:38.751877 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podStartSLOduration=98.751846873 podStartE2EDuration="1m38.751846873s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:38.748713984 +0000 UTC m=+118.503452297" watchObservedRunningTime="2026-01-31 14:56:38.751846873 +0000 UTC m=+118.506585216" Jan 31 14:56:38 crc kubenswrapper[4763]: I0131 14:56:38.839159 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-26pm5"] Jan 31 14:56:38 crc kubenswrapper[4763]: I0131 14:56:38.839318 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:38 crc kubenswrapper[4763]: E0131 14:56:38.839447 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:39 crc kubenswrapper[4763]: I0131 14:56:39.041860 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:39 crc kubenswrapper[4763]: E0131 14:56:39.041985 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:39 crc kubenswrapper[4763]: I0131 14:56:39.042260 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:39 crc kubenswrapper[4763]: E0131 14:56:39.042322 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:39 crc kubenswrapper[4763]: I0131 14:56:39.042759 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:39 crc kubenswrapper[4763]: E0131 14:56:39.044515 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:41 crc kubenswrapper[4763]: I0131 14:56:41.041342 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:41 crc kubenswrapper[4763]: I0131 14:56:41.041494 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:41 crc kubenswrapper[4763]: I0131 14:56:41.043676 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:41 crc kubenswrapper[4763]: E0131 14:56:41.043664 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:41 crc kubenswrapper[4763]: I0131 14:56:41.043832 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:41 crc kubenswrapper[4763]: E0131 14:56:41.043955 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:41 crc kubenswrapper[4763]: E0131 14:56:41.044022 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:41 crc kubenswrapper[4763]: E0131 14:56:41.044167 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:41 crc kubenswrapper[4763]: E0131 14:56:41.079040 4763 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 31 14:56:41 crc kubenswrapper[4763]: E0131 14:56:41.158035 4763 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 14:56:43 crc kubenswrapper[4763]: I0131 14:56:43.040964 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:43 crc kubenswrapper[4763]: I0131 14:56:43.041039 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:43 crc kubenswrapper[4763]: E0131 14:56:43.041526 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:43 crc kubenswrapper[4763]: I0131 14:56:43.041123 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:43 crc kubenswrapper[4763]: E0131 14:56:43.041650 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:43 crc kubenswrapper[4763]: I0131 14:56:43.041050 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:43 crc kubenswrapper[4763]: E0131 14:56:43.041819 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:43 crc kubenswrapper[4763]: E0131 14:56:43.041997 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:45 crc kubenswrapper[4763]: I0131 14:56:45.041078 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:45 crc kubenswrapper[4763]: I0131 14:56:45.041139 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:45 crc kubenswrapper[4763]: E0131 14:56:45.041288 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:45 crc kubenswrapper[4763]: I0131 14:56:45.041340 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:45 crc kubenswrapper[4763]: I0131 14:56:45.041346 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:45 crc kubenswrapper[4763]: E0131 14:56:45.041479 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:45 crc kubenswrapper[4763]: E0131 14:56:45.041612 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:45 crc kubenswrapper[4763]: E0131 14:56:45.041658 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:46 crc kubenswrapper[4763]: E0131 14:56:46.159136 4763 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 14:56:47 crc kubenswrapper[4763]: I0131 14:56:47.041150 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:47 crc kubenswrapper[4763]: I0131 14:56:47.041348 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:47 crc kubenswrapper[4763]: I0131 14:56:47.041272 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:47 crc kubenswrapper[4763]: I0131 14:56:47.041272 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:47 crc kubenswrapper[4763]: E0131 14:56:47.041446 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:47 crc kubenswrapper[4763]: E0131 14:56:47.041613 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:47 crc kubenswrapper[4763]: E0131 14:56:47.041826 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:47 crc kubenswrapper[4763]: E0131 14:56:47.041967 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:49 crc kubenswrapper[4763]: I0131 14:56:49.041083 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:49 crc kubenswrapper[4763]: I0131 14:56:49.041182 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:49 crc kubenswrapper[4763]: E0131 14:56:49.041308 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:49 crc kubenswrapper[4763]: I0131 14:56:49.041128 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:49 crc kubenswrapper[4763]: I0131 14:56:49.041615 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:49 crc kubenswrapper[4763]: E0131 14:56:49.041811 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:49 crc kubenswrapper[4763]: E0131 14:56:49.041926 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:49 crc kubenswrapper[4763]: E0131 14:56:49.041992 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:49 crc kubenswrapper[4763]: I0131 14:56:49.042011 4763 scope.go:117] "RemoveContainer" containerID="ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44" Jan 31 14:56:49 crc kubenswrapper[4763]: I0131 14:56:49.762021 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/1.log" Jan 31 14:56:49 crc kubenswrapper[4763]: I0131 14:56:49.762328 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzkhg" event={"ID":"2335d04f-10b2-4cf8-aae6-236650539c74","Type":"ContainerStarted","Data":"2769dbbb45eb5d98d9a4121f2a136f097f2dd1032e3c1238029f201a1307a3a6"} Jan 31 14:56:51 crc kubenswrapper[4763]: I0131 14:56:51.040904 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:51 crc kubenswrapper[4763]: I0131 14:56:51.040946 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:51 crc kubenswrapper[4763]: I0131 14:56:51.040994 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:51 crc kubenswrapper[4763]: I0131 14:56:51.041050 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:51 crc kubenswrapper[4763]: E0131 14:56:51.042251 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:51 crc kubenswrapper[4763]: E0131 14:56:51.042433 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:51 crc kubenswrapper[4763]: E0131 14:56:51.042613 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:51 crc kubenswrapper[4763]: E0131 14:56:51.042747 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.041397 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.041501 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.041538 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.041741 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.044327 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.044680 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.045110 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.045186 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.045317 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.045530 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 14:56:57 crc kubenswrapper[4763]: I0131 14:56:57.013907 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.114088 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.157928 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bpxtg"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.158443 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.160850 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.161429 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.165097 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.165372 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.166284 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.166773 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.187197 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.188854 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-9lvgt"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.189542 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.194763 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195256 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bwc2g"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195275 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195379 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195483 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195604 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195660 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195790 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195858 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195909 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195991 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.197604 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.200752 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tv9s8"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.201674 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.200766 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.200874 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.200935 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.202223 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.202165 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.200970 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.200989 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.201025 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.205658 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.206159 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.207601 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.208318 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.209185 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.209219 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-config\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.209236 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5csj\" (UniqueName: \"kubernetes.io/projected/b67dedfb-accc-467d-a3bb-508eab4f88c8-kube-api-access-j5csj\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.209260 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-client-ca\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.209276 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b67dedfb-accc-467d-a3bb-508eab4f88c8-serving-cert\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: 
\"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.209343 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.209658 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.213840 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mjbd9"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.214170 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.214436 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pcvn"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.214790 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.215272 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.215521 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.220894 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jj6qz"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.221396 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wjjvp"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.221687 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-bh727"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.230679 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.235902 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.260117 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dzr7c"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.260509 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-87f9c"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.261020 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.261068 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bh727" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.261297 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.261642 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.262343 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.262576 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.263093 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.263314 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.263782 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.265270 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.265409 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.265847 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.266174 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nzj54"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.266458 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.269754 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.270262 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.270826 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.272990 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.273056 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.273238 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.273312 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.279972 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.280770 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.281086 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.281327 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.282662 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-852vg"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.283182 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.286746 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bpxtg"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.288552 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flcgf"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.289831 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.290867 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.291750 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.292993 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.293611 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.296712 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.297320 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.297942 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.298246 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.301277 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.303004 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.303521 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.303668 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.304238 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.304720 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.307852 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.308607 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.308959 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gxcjc"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.309496 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.309717 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.309845 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312185 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65c4b2d3-8915-480e-abf5-3b3e0184f778-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2jtkm\" (UID: \"65c4b2d3-8915-480e-abf5-3b3e0184f778\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312232 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0cddc243-3a83-4398-87a9-7a111581bec5-console-serving-cert\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312272 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbbl5\" (UniqueName: \"kubernetes.io/projected/bb78095b-d026-498f-9616-d8365161f809-kube-api-access-tbbl5\") pod \"migrator-59844c95c7-qpplg\" (UID: \"bb78095b-d026-498f-9616-d8365161f809\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312298 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-trusted-ca\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312318 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqfbv\" (UniqueName: \"kubernetes.io/projected/65c4b2d3-8915-480e-abf5-3b3e0184f778-kube-api-access-qqfbv\") pod \"openshift-controller-manager-operator-756b6f6bc6-2jtkm\" (UID: \"65c4b2d3-8915-480e-abf5-3b3e0184f778\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312353 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7b2p\" (UniqueName: \"kubernetes.io/projected/954567bc-27c1-40c6-8fa3-8f653f90c199-kube-api-access-p7b2p\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312373 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/96ce635a-c905-4317-9f6d-64e1437d95c2-auth-proxy-config\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312392 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmpdd\" (UniqueName: 
\"kubernetes.io/projected/ed007985-f681-4a45-a71a-ba27798fa94d-kube-api-access-cmpdd\") pod \"cluster-samples-operator-665b6dd947-75x7z\" (UID: \"ed007985-f681-4a45-a71a-ba27798fa94d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312411 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flhh8\" (UniqueName: \"kubernetes.io/projected/7540b5d1-cac8-4c3d-88f1-cd961bf8bd47-kube-api-access-flhh8\") pod \"dns-operator-744455d44c-jj6qz\" (UID: \"7540b5d1-cac8-4c3d-88f1-cd961bf8bd47\") " pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312435 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312458 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csx57\" (UniqueName: \"kubernetes.io/projected/156e6a74-f3a0-4ae0-8233-36da8946b7d6-kube-api-access-csx57\") pod \"machine-config-controller-84d6567774-zznr9\" (UID: \"156e6a74-f3a0-4ae0-8233-36da8946b7d6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312482 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft9n7\" (UniqueName: \"kubernetes.io/projected/96ce635a-c905-4317-9f6d-64e1437d95c2-kube-api-access-ft9n7\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312508 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-console-config\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312530 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-images\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312552 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-config\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312573 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/0cddc243-3a83-4398-87a9-7a111581bec5-console-oauth-config\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312610 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/954567bc-27c1-40c6-8fa3-8f653f90c199-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312630 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/954567bc-27c1-40c6-8fa3-8f653f90c199-service-ca-bundle\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312651 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s8f5\" (UniqueName: \"kubernetes.io/projected/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-kube-api-access-8s8f5\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312672 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed007985-f681-4a45-a71a-ba27798fa94d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-75x7z\" (UID: \"ed007985-f681-4a45-a71a-ba27798fa94d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312720 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/156e6a74-f3a0-4ae0-8233-36da8946b7d6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zznr9\" (UID: \"156e6a74-f3a0-4ae0-8233-36da8946b7d6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312803 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9lvgt"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312855 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.313577 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.314091 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.314910 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.315095 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.315301 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.315439 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.316417 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.316580 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.316611 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.316784 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.316837 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.316920 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.316968 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.317036 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.317093 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.317136 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.317404 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.317563 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.319563 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jj6qz"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.320057 4763 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.320208 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.320310 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.320404 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.320507 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.320601 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.320717 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.320815 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.320903 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-config\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.321023 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.321465 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.321658 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.321780 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.321871 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.321976 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.322086 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.322148 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 
14:56:59.322185 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.322251 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.322260 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.322923 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-w9cb6"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.341903 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342155 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/96ce635a-c905-4317-9f6d-64e1437d95c2-machine-approver-tls\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342193 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342222 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-metrics-tls\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342242 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342268 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/156e6a74-f3a0-4ae0-8233-36da8946b7d6-proxy-tls\") pod \"machine-config-controller-84d6567774-zznr9\" (UID: \"156e6a74-f3a0-4ae0-8233-36da8946b7d6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342289 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65c4b2d3-8915-480e-abf5-3b3e0184f778-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2jtkm\" (UID: \"65c4b2d3-8915-480e-abf5-3b3e0184f778\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342307 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96ce635a-c905-4317-9f6d-64e1437d95c2-config\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342331 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7540b5d1-cac8-4c3d-88f1-cd961bf8bd47-metrics-tls\") pod \"dns-operator-744455d44c-jj6qz\" (UID: \"7540b5d1-cac8-4c3d-88f1-cd961bf8bd47\") " pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342352 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-oauth-serving-cert\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342371 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-trusted-ca-bundle\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342389 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89hlp\" (UniqueName: \"kubernetes.io/projected/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-kube-api-access-89hlp\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342430 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/954567bc-27c1-40c6-8fa3-8f653f90c199-config\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342462 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-service-ca\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342573 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5csj\" (UniqueName: \"kubernetes.io/projected/b67dedfb-accc-467d-a3bb-508eab4f88c8-kube-api-access-j5csj\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342773 
4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-w9cb6" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.343100 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9krq7\" (UniqueName: \"kubernetes.io/projected/db0aea6c-f6f8-4548-905b-22d810b334d4-kube-api-access-9krq7\") pod \"downloads-7954f5f757-bh727\" (UID: \"db0aea6c-f6f8-4548-905b-22d810b334d4\") " pod="openshift-console/downloads-7954f5f757-bh727" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.343131 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-client-ca\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.343167 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/954567bc-27c1-40c6-8fa3-8f653f90c199-serving-cert\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.343188 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b67dedfb-accc-467d-a3bb-508eab4f88c8-serving-cert\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.343209 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7plc\" (UniqueName: \"kubernetes.io/projected/0cddc243-3a83-4398-87a9-7a111581bec5-kube-api-access-v7plc\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.343228 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-config\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.343434 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bwc2g"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.344250 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-client-ca\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.346263 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5kfwr"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.352071 4763 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b67dedfb-accc-467d-a3bb-508eab4f88c8-serving-cert\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.360430 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bh727"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.360523 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.360802 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.361088 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.361200 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.361204 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.361479 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.361605 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.362086 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.362168 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.362402 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.362567 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wjjvp"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.362740 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.363353 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.363439 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.363468 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.363514 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.363532 4763 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.363580 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.363614 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.363649 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.363685 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.364905 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.365076 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.365249 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.365395 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.366284 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.366373 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.366560 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.369505 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.369736 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.369857 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.373144 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.373334 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.373477 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.373580 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.374238 4763 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.374428 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pcvn"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.374798 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.376149 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.376956 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.378186 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.382460 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.385213 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.387904 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.388927 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.389440 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.389586 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.390651 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.392549 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.393970 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.395984 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.397095 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.397245 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.398287 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 14:56:59 crc 
kubenswrapper[4763]: I0131 14:56:59.400802 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nzj54"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.400841 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flcgf"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.400851 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.402183 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dzr7c"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.403262 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.403820 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.424165 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.424273 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tv9s8"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.427417 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.428836 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-l8kn4"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.429615 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-l8kn4" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.429732 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.431851 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.433947 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.435369 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.437656 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.440064 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gxcjc"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.441938 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.442759 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444192 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444502 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/96ce635a-c905-4317-9f6d-64e1437d95c2-auth-proxy-config\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444535 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmpdd\" (UniqueName: \"kubernetes.io/projected/ed007985-f681-4a45-a71a-ba27798fa94d-kube-api-access-cmpdd\") pod \"cluster-samples-operator-665b6dd947-75x7z\" (UID: \"ed007985-f681-4a45-a71a-ba27798fa94d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444558 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flhh8\" (UniqueName: \"kubernetes.io/projected/7540b5d1-cac8-4c3d-88f1-cd961bf8bd47-kube-api-access-flhh8\") pod \"dns-operator-744455d44c-jj6qz\" (UID: \"7540b5d1-cac8-4c3d-88f1-cd961bf8bd47\") " pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444583 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csx57\" (UniqueName: \"kubernetes.io/projected/156e6a74-f3a0-4ae0-8233-36da8946b7d6-kube-api-access-csx57\") pod \"machine-config-controller-84d6567774-zznr9\" (UID: \"156e6a74-f3a0-4ae0-8233-36da8946b7d6\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444605 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft9n7\" (UniqueName: \"kubernetes.io/projected/96ce635a-c905-4317-9f6d-64e1437d95c2-kube-api-access-ft9n7\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444629 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-console-config\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444652 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-images\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444672 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0cddc243-3a83-4398-87a9-7a111581bec5-console-oauth-config\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444712 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/954567bc-27c1-40c6-8fa3-8f653f90c199-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444735 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/954567bc-27c1-40c6-8fa3-8f653f90c199-service-ca-bundle\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444754 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444779 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s8f5\" (UniqueName: \"kubernetes.io/projected/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-kube-api-access-8s8f5\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444803 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed007985-f681-4a45-a71a-ba27798fa94d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-75x7z\" (UID: 
\"ed007985-f681-4a45-a71a-ba27798fa94d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444827 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/156e6a74-f3a0-4ae0-8233-36da8946b7d6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zznr9\" (UID: \"156e6a74-f3a0-4ae0-8233-36da8946b7d6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444851 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/96ce635a-c905-4317-9f6d-64e1437d95c2-machine-approver-tls\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444873 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444894 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-metrics-tls\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444915 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444937 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/156e6a74-f3a0-4ae0-8233-36da8946b7d6-proxy-tls\") pod \"machine-config-controller-84d6567774-zznr9\" (UID: \"156e6a74-f3a0-4ae0-8233-36da8946b7d6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444958 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65c4b2d3-8915-480e-abf5-3b3e0184f778-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2jtkm\" (UID: \"65c4b2d3-8915-480e-abf5-3b3e0184f778\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444980 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96ce635a-c905-4317-9f6d-64e1437d95c2-config\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445000 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7540b5d1-cac8-4c3d-88f1-cd961bf8bd47-metrics-tls\") pod \"dns-operator-744455d44c-jj6qz\" (UID: \"7540b5d1-cac8-4c3d-88f1-cd961bf8bd47\") " pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445022 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-oauth-serving-cert\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445043 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-trusted-ca-bundle\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445066 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89hlp\" (UniqueName: \"kubernetes.io/projected/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-kube-api-access-89hlp\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445094 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/954567bc-27c1-40c6-8fa3-8f653f90c199-config\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445134 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-service-ca\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445163 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9krq7\" (UniqueName: \"kubernetes.io/projected/db0aea6c-f6f8-4548-905b-22d810b334d4-kube-api-access-9krq7\") pod \"downloads-7954f5f757-bh727\" (UID: \"db0aea6c-f6f8-4548-905b-22d810b334d4\") " pod="openshift-console/downloads-7954f5f757-bh727" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445184 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/954567bc-27c1-40c6-8fa3-8f653f90c199-serving-cert\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445203 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-config\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445226 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7plc\" (UniqueName: \"kubernetes.io/projected/0cddc243-3a83-4398-87a9-7a111581bec5-kube-api-access-v7plc\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445250 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0cddc243-3a83-4398-87a9-7a111581bec5-console-serving-cert\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445274 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65c4b2d3-8915-480e-abf5-3b3e0184f778-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2jtkm\" (UID: \"65c4b2d3-8915-480e-abf5-3b3e0184f778\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445329 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbbl5\" (UniqueName: \"kubernetes.io/projected/bb78095b-d026-498f-9616-d8365161f809-kube-api-access-tbbl5\") pod \"migrator-59844c95c7-qpplg\" (UID: \"bb78095b-d026-498f-9616-d8365161f809\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445363 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-trusted-ca\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445422 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqfbv\" (UniqueName: \"kubernetes.io/projected/65c4b2d3-8915-480e-abf5-3b3e0184f778-kube-api-access-qqfbv\") pod \"openshift-controller-manager-operator-756b6f6bc6-2jtkm\" (UID: \"65c4b2d3-8915-480e-abf5-3b3e0184f778\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445455 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7b2p\" (UniqueName: \"kubernetes.io/projected/954567bc-27c1-40c6-8fa3-8f653f90c199-kube-api-access-p7b2p\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.446507 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/96ce635a-c905-4317-9f6d-64e1437d95c2-auth-proxy-config\") pod \"machine-approver-56656f9798-zd45q\" 
(UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.446822 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96ce635a-c905-4317-9f6d-64e1437d95c2-config\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.447301 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-console-config\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.447813 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-852vg"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.447844 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mjbd9"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.447853 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-l8kn4"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.447974 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-images\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.448054 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-oauth-serving-cert\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.448667 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/954567bc-27c1-40c6-8fa3-8f653f90c199-config\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.448829 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-trusted-ca-bundle\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.449209 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-service-ca\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.449269 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.450123 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.450342 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/954567bc-27c1-40c6-8fa3-8f653f90c199-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.450634 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/954567bc-27c1-40c6-8fa3-8f653f90c199-service-ca-bundle\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.450801 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-config\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.450968 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0cddc243-3a83-4398-87a9-7a111581bec5-console-oauth-config\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.451205 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7540b5d1-cac8-4c3d-88f1-cd961bf8bd47-metrics-tls\") pod \"dns-operator-744455d44c-jj6qz\" (UID: \"7540b5d1-cac8-4c3d-88f1-cd961bf8bd47\") " pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.451247 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.451309 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/156e6a74-f3a0-4ae0-8233-36da8946b7d6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zznr9\" (UID: \"156e6a74-f3a0-4ae0-8233-36da8946b7d6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.451316 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/96ce635a-c905-4317-9f6d-64e1437d95c2-machine-approver-tls\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.452215 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.452504 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/954567bc-27c1-40c6-8fa3-8f653f90c199-serving-cert\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.452746 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed007985-f681-4a45-a71a-ba27798fa94d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-75x7z\" (UID: \"ed007985-f681-4a45-a71a-ba27798fa94d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.453561 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0cddc243-3a83-4398-87a9-7a111581bec5-console-serving-cert\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.453653 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.456084 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5kfwr"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.457347 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.458222 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.459408 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5q274"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.460110 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5q274" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.460375 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5q274"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.463620 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.480883 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.500640 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.520957 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.540867 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.549193 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-metrics-tls\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.568546 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.570991 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-trusted-ca\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.580294 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.600676 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.620413 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.641797 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.661131 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.681203 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.701776 4763 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.720533 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.730545 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65c4b2d3-8915-480e-abf5-3b3e0184f778-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2jtkm\" (UID: \"65c4b2d3-8915-480e-abf5-3b3e0184f778\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.741171 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.751339 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65c4b2d3-8915-480e-abf5-3b3e0184f778-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2jtkm\" (UID: \"65c4b2d3-8915-480e-abf5-3b3e0184f778\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.761091 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.781688 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.805444 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.821093 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.842188 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.861144 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.881949 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.901672 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.922278 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.941284 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.961004 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 14:56:59 crc kubenswrapper[4763]: 
I0131 14:56:59.982435 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.001178 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.020274 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.040981 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.061513 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.102133 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.120813 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.130955 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/156e6a74-f3a0-4ae0-8233-36da8946b7d6-proxy-tls\") pod \"machine-config-controller-84d6567774-zznr9\" (UID: \"156e6a74-f3a0-4ae0-8233-36da8946b7d6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.141318 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.162004 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.181557 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.201067 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.221858 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.241389 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.270321 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.281192 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.299778 4763 request.go:700] Waited for 1.00956254s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0 Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.301582 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.321750 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.341092 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.367096 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.381003 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.401457 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.421513 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.441275 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.460817 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.482148 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.501185 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.520566 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.541202 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.561201 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.580950 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.602063 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.621866 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 
14:57:00.641396 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.662063 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.681954 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.702556 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.721952 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.742127 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.762462 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.782339 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.801062 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.820683 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.841246 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.862006 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.882054 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.900561 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.921042 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.942066 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.961355 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.982042 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.021789 
4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.031233 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5csj\" (UniqueName: \"kubernetes.io/projected/b67dedfb-accc-467d-a3bb-508eab4f88c8-kube-api-access-j5csj\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.042320 4763 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.061194 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.101368 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.121159 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.141961 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.192570 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7b2p\" (UniqueName: \"kubernetes.io/projected/954567bc-27c1-40c6-8fa3-8f653f90c199-kube-api-access-p7b2p\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.210302 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.228968 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmpdd\" (UniqueName: \"kubernetes.io/projected/ed007985-f681-4a45-a71a-ba27798fa94d-kube-api-access-cmpdd\") pod \"cluster-samples-operator-665b6dd947-75x7z\" (UID: \"ed007985-f681-4a45-a71a-ba27798fa94d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.248714 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flhh8\" (UniqueName: \"kubernetes.io/projected/7540b5d1-cac8-4c3d-88f1-cd961bf8bd47-kube-api-access-flhh8\") pod \"dns-operator-744455d44c-jj6qz\" (UID: \"7540b5d1-cac8-4c3d-88f1-cd961bf8bd47\") " pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.249427 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.261447 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csx57\" (UniqueName: \"kubernetes.io/projected/156e6a74-f3a0-4ae0-8233-36da8946b7d6-kube-api-access-csx57\") pod \"machine-config-controller-84d6567774-zznr9\" (UID: \"156e6a74-f3a0-4ae0-8233-36da8946b7d6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.282850 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft9n7\" (UniqueName: \"kubernetes.io/projected/96ce635a-c905-4317-9f6d-64e1437d95c2-kube-api-access-ft9n7\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.285791 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.296710 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89hlp\" (UniqueName: \"kubernetes.io/projected/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-kube-api-access-89hlp\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.319266 4763 request.go:700] Waited for 1.868969302s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/console/token Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.320801 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9krq7\" (UniqueName: \"kubernetes.io/projected/db0aea6c-f6f8-4548-905b-22d810b334d4-kube-api-access-9krq7\") pod \"downloads-7954f5f757-bh727\" (UID: \"db0aea6c-f6f8-4548-905b-22d810b334d4\") " pod="openshift-console/downloads-7954f5f757-bh727" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.350124 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7plc\" (UniqueName: \"kubernetes.io/projected/0cddc243-3a83-4398-87a9-7a111581bec5-kube-api-access-v7plc\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.356426 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.366345 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqfbv\" (UniqueName: \"kubernetes.io/projected/65c4b2d3-8915-480e-abf5-3b3e0184f778-kube-api-access-qqfbv\") pod \"openshift-controller-manager-operator-756b6f6bc6-2jtkm\" (UID: \"65c4b2d3-8915-480e-abf5-3b3e0184f778\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.379668 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.389251 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s8f5\" (UniqueName: \"kubernetes.io/projected/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-kube-api-access-8s8f5\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.389352 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.400381 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbbl5\" (UniqueName: \"kubernetes.io/projected/bb78095b-d026-498f-9616-d8365161f809-kube-api-access-tbbl5\") pod \"migrator-59844c95c7-qpplg\" (UID: \"bb78095b-d026-498f-9616-d8365161f809\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.401107 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.421465 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.441847 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.460795 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.471025 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.500890 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wjjvp"] Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.501458 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" Jan 31 14:57:01 crc kubenswrapper[4763]: W0131 14:57:01.532849 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod954567bc_27c1_40c6_8fa3_8f653f90c199.slice/crio-da9911c212d0087ba033bb74fab71ce73c49d040572cf055075f1e14bb37f205 WatchSource:0}: Error finding container da9911c212d0087ba033bb74fab71ce73c49d040572cf055075f1e14bb37f205: Status 404 returned error can't find the container with id da9911c212d0087ba033bb74fab71ce73c49d040572cf055075f1e14bb37f205 Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.540200 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.547813 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bpxtg"] Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571447 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-serving-cert\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571478 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44m57\" (UniqueName: \"kubernetes.io/projected/ac92922f-89ed-41e7-bf6f-9750efc9cab0-kube-api-access-44m57\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571510 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31cdca6f-11b2-4888-9a4c-4b06a94d1863-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2bx5m\" (UID: \"31cdca6f-11b2-4888-9a4c-4b06a94d1863\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571525 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/330d3fd9-790f-406d-a122-152a1ab07e5c-etcd-client\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571549 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f56211-548f-4d20-9c0a-70108a8f557b-config\") pod \"kube-controller-manager-operator-78b949d7b-8gxz5\" (UID: \"f6f56211-548f-4d20-9c0a-70108a8f557b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571563 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/22ff872b-afc9-4fa7-812b-f47bb3add27c-images\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571577 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-config\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571592 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj96k\" (UniqueName: 
\"kubernetes.io/projected/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-kube-api-access-mj96k\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571607 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571622 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p6kd\" (UniqueName: \"kubernetes.io/projected/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-kube-api-access-8p6kd\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571636 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac92922f-89ed-41e7-bf6f-9750efc9cab0-serving-cert\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571652 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8edb2bab-1e72-4b68-afed-2de0572a1071-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-c2zm8\" (UID: \"8edb2bab-1e72-4b68-afed-2de0572a1071\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571666 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-stats-auth\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571679 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/330d3fd9-790f-406d-a122-152a1ab07e5c-audit-dir\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571708 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctc8x\" (UniqueName: \"kubernetes.io/projected/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-kube-api-access-ctc8x\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571723 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-etcd-serving-ca\") pod 
\"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571739 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/329bb364-3958-490e-b065-d2ce7ee1567d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571755 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac92922f-89ed-41e7-bf6f-9750efc9cab0-audit-policies\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571770 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac92922f-89ed-41e7-bf6f-9750efc9cab0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571785 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqmrv\" (UniqueName: \"kubernetes.io/projected/31cdca6f-11b2-4888-9a4c-4b06a94d1863-kube-api-access-xqmrv\") pod \"openshift-apiserver-operator-796bbdcf4f-2bx5m\" (UID: \"31cdca6f-11b2-4888-9a4c-4b06a94d1863\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571805 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac92922f-89ed-41e7-bf6f-9750efc9cab0-audit-dir\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571819 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/229488e3-89a8-4eb4-841e-980db3f8cfb3-serving-cert\") pod \"openshift-config-operator-7777fb866f-5zjq4\" (UID: \"229488e3-89a8-4eb4-841e-980db3f8cfb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571832 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac92922f-89ed-41e7-bf6f-9750efc9cab0-etcd-client\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571845 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-trusted-ca\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571859 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571875 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/330d3fd9-790f-406d-a122-152a1ab07e5c-serving-cert\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571890 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-bound-sa-token\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571907 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571920 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ac92922f-89ed-41e7-bf6f-9750efc9cab0-encryption-config\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571934 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-audit\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571951 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571965 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571982 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571998 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-trusted-ca\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572015 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2df2f\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-kube-api-access-2df2f\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572029 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-etcd-ca\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572042 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572059 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-etcd-service-ca\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572072 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-etcd-client\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572087 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ac92922f-89ed-41e7-bf6f-9750efc9cab0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 
14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572102 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22ff872b-afc9-4fa7-812b-f47bb3add27c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572136 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/330d3fd9-790f-406d-a122-152a1ab07e5c-node-pullsecrets\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572149 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f56211-548f-4d20-9c0a-70108a8f557b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8gxz5\" (UID: \"f6f56211-548f-4d20-9c0a-70108a8f557b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572164 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-image-import-ca\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572177 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31cdca6f-11b2-4888-9a4c-4b06a94d1863-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2bx5m\" (UID: \"31cdca6f-11b2-4888-9a4c-4b06a94d1863\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572201 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/330d3fd9-790f-406d-a122-152a1ab07e5c-encryption-config\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572215 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-service-ca-bundle\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572229 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-client-ca\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:01 crc kubenswrapper[4763]: 
I0131 14:57:01.572253 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-config\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572274 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-metrics-certs\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572288 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572303 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22ff872b-afc9-4fa7-812b-f47bb3add27c-proxy-tls\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572318 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkmj5\" (UniqueName: \"kubernetes.io/projected/22ff872b-afc9-4fa7-812b-f47bb3add27c-kube-api-access-bkmj5\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572335 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-dir\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572352 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572366 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-registry-tls\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 
14:57:01.572380 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8edb2bab-1e72-4b68-afed-2de0572a1071-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-c2zm8\" (UID: \"8edb2bab-1e72-4b68-afed-2de0572a1071\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572395 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/229488e3-89a8-4eb4-841e-980db3f8cfb3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5zjq4\" (UID: \"229488e3-89a8-4eb4-841e-980db3f8cfb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572420 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7cfv\" (UniqueName: \"kubernetes.io/projected/e1b409a5-8274-478d-98bf-fe2171d90c63-kube-api-access-v7cfv\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572445 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llk9p\" (UniqueName: \"kubernetes.io/projected/275ea46d-7a78-4457-a5ba-7b3000170d0e-kube-api-access-llk9p\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572459 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8edb2bab-1e72-4b68-afed-2de0572a1071-config\") pod \"kube-apiserver-operator-766d6c64bb-c2zm8\" (UID: \"8edb2bab-1e72-4b68-afed-2de0572a1071\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572485 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572500 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-config\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572514 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/329bb364-3958-490e-b065-d2ce7ee1567d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572528 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6f56211-548f-4d20-9c0a-70108a8f557b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8gxz5\" (UID: \"f6f56211-548f-4d20-9c0a-70108a8f557b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572542 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572556 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572572 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572588 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-serving-cert\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572604 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-policies\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572621 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1b409a5-8274-478d-98bf-fe2171d90c63-serving-cert\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572635 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-default-certificate\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") 
" pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572650 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjb6g\" (UniqueName: \"kubernetes.io/projected/229488e3-89a8-4eb4-841e-980db3f8cfb3-kube-api-access-cjb6g\") pod \"openshift-config-operator-7777fb866f-5zjq4\" (UID: \"229488e3-89a8-4eb4-841e-980db3f8cfb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572667 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-config\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572685 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-registry-certificates\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572716 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgrjn\" (UniqueName: \"kubernetes.io/projected/330d3fd9-790f-406d-a122-152a1ab07e5c-kube-api-access-mgrjn\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: E0131 14:57:01.573276 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.07325966 +0000 UTC m=+141.827997953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.592193 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bh727" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.603175 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.607920 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.622672 4763 util.go:30] "No sandbox for pod can be found. 
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.592193 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bh727"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.603175 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.607920 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.622672 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.675319 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.675584 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2df2f\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-kube-api-access-2df2f\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.675627 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-etcd-ca\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.675655 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.675688 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krzb2\" (UniqueName: \"kubernetes.io/projected/1113d5ad-40c9-412f-92c2-2fb0d6ec2903-kube-api-access-krzb2\") pod \"dns-default-l8kn4\" (UID: \"1113d5ad-40c9-412f-92c2-2fb0d6ec2903\") " pod="openshift-dns/dns-default-l8kn4"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.675734 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e043d261-8774-411b-be6d-98dbb1f210a2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jtt2l\" (UID: \"e043d261-8774-411b-be6d-98dbb1f210a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.675757 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbsgk\" (UniqueName: \"kubernetes.io/projected/92ef1804-52cd-46a1-86e1-baf561981f8b-kube-api-access-zbsgk\") pod \"ingress-canary-5q274\" (UID: \"92ef1804-52cd-46a1-86e1-baf561981f8b\") " pod="openshift-ingress-canary/ingress-canary-5q274"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.675787 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-etcd-service-ca\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.676365 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bwc2g"]
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.677067 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-etcd-ca\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54"
Jan 31 14:57:01 crc kubenswrapper[4763]: E0131 14:57:01.677141 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.177127773 +0000 UTC m=+141.931866066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678045 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-etcd-client\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678105 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c40a34-73d2-4a28-b2bd-31e19e6361d2-secret-volume\") pod \"collect-profiles-29497845-zn989\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678141 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ac92922f-89ed-41e7-bf6f-9750efc9cab0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678164 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301a24de-a6b1-45a1-a12d-663325e45fd6-profile-collector-cert\") pod \"catalog-operator-68c6474976-4fhtj\" (UID: \"301a24de-a6b1-45a1-a12d-663325e45fd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678187 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f313ecec-c631-4270-a297-51e482e3e306-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z8wss\" (UID: \"f313ecec-c631-4270-a297-51e482e3e306\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678217 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22ff872b-afc9-4fa7-812b-f47bb3add27c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678241 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1dc5ca38-64fe-41f8-a989-0b035bf29414-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j7kx4\" (UID: \"1dc5ca38-64fe-41f8-a989-0b035bf29414\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678268 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/330d3fd9-790f-406d-a122-152a1ab07e5c-node-pullsecrets\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678293 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a7826828-7856-44a4-be9f-f1a939950c3e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dncp9\" (UID: \"a7826828-7856-44a4-be9f-f1a939950c3e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678319 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31cdca6f-11b2-4888-9a4c-4b06a94d1863-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2bx5m\" (UID: \"31cdca6f-11b2-4888-9a4c-4b06a94d1863\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678343 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f56211-548f-4d20-9c0a-70108a8f557b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8gxz5\" (UID: \"f6f56211-548f-4d20-9c0a-70108a8f557b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678372 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-image-import-ca\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678411 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-flcgf\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-flcgf"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678436 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/92ef1804-52cd-46a1-86e1-baf561981f8b-cert\") pod \"ingress-canary-5q274\" (UID: \"92ef1804-52cd-46a1-86e1-baf561981f8b\") " pod="openshift-ingress-canary/ingress-canary-5q274"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678464 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvtqq\" (UniqueName: \"kubernetes.io/projected/80834dac-7e21-4dda-8f32-3a19eced5753-kube-api-access-vvtqq\") pod \"multus-admission-controller-857f4d67dd-852vg\" (UID: \"80834dac-7e21-4dda-8f32-3a19eced5753\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678490 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/330d3fd9-790f-406d-a122-152a1ab07e5c-encryption-config\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678512 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-flcgf\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-flcgf"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678539 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-service-ca-bundle\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678563 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-client-ca\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678638 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-config\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678667 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f2d8117-c3e5-498f-8458-e72238d0f0ac-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-l8wxr\" (UID: \"6f2d8117-c3e5-498f-8458-e72238d0f0ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678711 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a766d0cf-2406-4406-aaec-51a9da3d6b55-certs\") pod \"machine-config-server-w9cb6\" (UID: \"a766d0cf-2406-4406-aaec-51a9da3d6b55\") " pod="openshift-machine-config-operator/machine-config-server-w9cb6"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678754 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678779 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-metrics-certs\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678800 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22ff872b-afc9-4fa7-812b-f47bb3add27c-proxy-tls\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678824 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkmj5\" (UniqueName: \"kubernetes.io/projected/22ff872b-afc9-4fa7-812b-f47bb3add27c-kube-api-access-bkmj5\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678849 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-dir\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678876 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678901 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e043d261-8774-411b-be6d-98dbb1f210a2-srv-cert\") pod \"olm-operator-6b444d44fb-jtt2l\" (UID: \"e043d261-8774-411b-be6d-98dbb1f210a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678954 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-registry-tls\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678976 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84d11c6f-169b-4e21-87ec-8bb8930a1831-apiservice-cert\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679015 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8edb2bab-1e72-4b68-afed-2de0572a1071-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-c2zm8\" (UID: \"8edb2bab-1e72-4b68-afed-2de0572a1071\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679063 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/229488e3-89a8-4eb4-841e-980db3f8cfb3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5zjq4\" (UID: \"229488e3-89a8-4eb4-841e-980db3f8cfb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679085 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e6758143-5085-416e-9bdc-856a520c71de-signing-key\") pod \"service-ca-9c57cc56f-gxcjc\" (UID: \"e6758143-5085-416e-9bdc-856a520c71de\") " pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679108 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swj8t\" (UniqueName: \"kubernetes.io/projected/e6758143-5085-416e-9bdc-856a520c71de-kube-api-access-swj8t\") pod \"service-ca-9c57cc56f-gxcjc\" (UID: \"e6758143-5085-416e-9bdc-856a520c71de\") " pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679136 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7cfv\" (UniqueName: \"kubernetes.io/projected/e1b409a5-8274-478d-98bf-fe2171d90c63-kube-api-access-v7cfv\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679169 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84d11c6f-169b-4e21-87ec-8bb8930a1831-webhook-cert\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679192 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09c58528-2088-4902-ab32-10cd90be0562-config\") pod \"service-ca-operator-777779d784-g5s5p\" (UID: \"09c58528-2088-4902-ab32-10cd90be0562\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679217 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llk9p\" (UniqueName: \"kubernetes.io/projected/275ea46d-7a78-4457-a5ba-7b3000170d0e-kube-api-access-llk9p\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679257 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8edb2bab-1e72-4b68-afed-2de0572a1071-config\") pod \"kube-apiserver-operator-766d6c64bb-c2zm8\" (UID: \"8edb2bab-1e72-4b68-afed-2de0572a1071\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679312 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679337 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/329bb364-3958-490e-b065-d2ce7ee1567d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679359 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-config\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679382 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dc5ca38-64fe-41f8-a989-0b035bf29414-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j7kx4\" (UID: \"1dc5ca38-64fe-41f8-a989-0b035bf29414\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679405 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjksc\" (UniqueName: \"kubernetes.io/projected/f313ecec-c631-4270-a297-51e482e3e306-kube-api-access-zjksc\") pod \"kube-storage-version-migrator-operator-b67b599dd-z8wss\" (UID: \"f313ecec-c631-4270-a297-51e482e3e306\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679442 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679467 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6f56211-548f-4d20-9c0a-70108a8f557b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8gxz5\" (UID: \"f6f56211-548f-4d20-9c0a-70108a8f557b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679488 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679508 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-serving-cert\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679531 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2rjv\" (UniqueName: \"kubernetes.io/projected/a766d0cf-2406-4406-aaec-51a9da3d6b55-kube-api-access-x2rjv\") pod \"machine-config-server-w9cb6\" (UID: \"a766d0cf-2406-4406-aaec-51a9da3d6b55\") " pod="openshift-machine-config-operator/machine-config-server-w9cb6"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679555 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679578 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-policies\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679600 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-default-certificate\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679621 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjb6g\" (UniqueName: \"kubernetes.io/projected/229488e3-89a8-4eb4-841e-980db3f8cfb3-kube-api-access-cjb6g\") pod \"openshift-config-operator-7777fb866f-5zjq4\" (UID: \"229488e3-89a8-4eb4-841e-980db3f8cfb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679644 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1b409a5-8274-478d-98bf-fe2171d90c63-serving-cert\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679667 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-665h2\" (UniqueName: \"kubernetes.io/projected/84d11c6f-169b-4e21-87ec-8bb8930a1831-kube-api-access-665h2\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679740 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6jtr\" (UniqueName: \"kubernetes.io/projected/a7826828-7856-44a4-be9f-f1a939950c3e-kube-api-access-p6jtr\") pod \"control-plane-machine-set-operator-78cbb6b69f-dncp9\" (UID: \"a7826828-7856-44a4-be9f-f1a939950c3e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679764 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-plugins-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679792 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f313ecec-c631-4270-a297-51e482e3e306-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z8wss\" (UID: \"f313ecec-c631-4270-a297-51e482e3e306\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679819 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-config\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679848 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-socket-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679887 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-registry-certificates\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679910 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgrjn\" (UniqueName: \"kubernetes.io/projected/330d3fd9-790f-406d-a122-152a1ab07e5c-kube-api-access-mgrjn\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679932 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-csi-data-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679957 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679982 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-serving-cert\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680003 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44m57\" (UniqueName: \"kubernetes.io/projected/ac92922f-89ed-41e7-bf6f-9750efc9cab0-kube-api-access-44m57\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680026 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gq7w\" (UniqueName: \"kubernetes.io/projected/20c40a34-73d2-4a28-b2bd-31e19e6361d2-kube-api-access-4gq7w\") pod \"collect-profiles-29497845-zn989\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680084 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1113d5ad-40c9-412f-92c2-2fb0d6ec2903-metrics-tls\") pod \"dns-default-l8kn4\" (UID: \"1113d5ad-40c9-412f-92c2-2fb0d6ec2903\") " pod="openshift-dns/dns-default-l8kn4"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680110 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31cdca6f-11b2-4888-9a4c-4b06a94d1863-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2bx5m\" (UID: \"31cdca6f-11b2-4888-9a4c-4b06a94d1863\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680132 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfdsx\" (UniqueName: \"kubernetes.io/projected/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-kube-api-access-sfdsx\") pod \"marketplace-operator-79b997595-flcgf\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-flcgf"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680151 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e6758143-5085-416e-9bdc-856a520c71de-signing-cabundle\") pod \"service-ca-9c57cc56f-gxcjc\" (UID: \"e6758143-5085-416e-9bdc-856a520c71de\") " pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680171 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-registration-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680208 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/84d11c6f-169b-4e21-87ec-8bb8930a1831-tmpfs\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680231 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/330d3fd9-790f-406d-a122-152a1ab07e5c-etcd-client\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680265 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f56211-548f-4d20-9c0a-70108a8f557b-config\") pod \"kube-controller-manager-operator-78b949d7b-8gxz5\" (UID: \"f6f56211-548f-4d20-9c0a-70108a8f557b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680286 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/22ff872b-afc9-4fa7-812b-f47bb3add27c-images\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680305 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-config\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680329 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj96k\" (UniqueName: \"kubernetes.io/projected/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-kube-api-access-mj96k\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680350 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680372 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p6kd\" (UniqueName: \"kubernetes.io/projected/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-kube-api-access-8p6kd\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680393 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac92922f-89ed-41e7-bf6f-9750efc9cab0-serving-cert\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680420 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8edb2bab-1e72-4b68-afed-2de0572a1071-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-c2zm8\" (UID: \"8edb2bab-1e72-4b68-afed-2de0572a1071\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680442 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-stats-auth\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680461 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/330d3fd9-790f-406d-a122-152a1ab07e5c-audit-dir\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680482 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680502 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctc8x\" (UniqueName: \"kubernetes.io/projected/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-kube-api-access-ctc8x\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680522 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-etcd-serving-ca\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680545 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/329bb364-3958-490e-b065-d2ce7ee1567d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680570 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac92922f-89ed-41e7-bf6f-9750efc9cab0-audit-policies\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680593 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc5ca38-64fe-41f8-a989-0b035bf29414-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j7kx4\" (UID: \"1dc5ca38-64fe-41f8-a989-0b035bf29414\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680632 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac92922f-89ed-41e7-bf6f-9750efc9cab0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680660 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqmrv\" (UniqueName: \"kubernetes.io/projected/31cdca6f-11b2-4888-9a4c-4b06a94d1863-kube-api-access-xqmrv\") pod \"openshift-apiserver-operator-796bbdcf4f-2bx5m\" (UID: \"31cdca6f-11b2-4888-9a4c-4b06a94d1863\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680682 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-mountpoint-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680766 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac92922f-89ed-41e7-bf6f-9750efc9cab0-audit-dir\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.681860 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.683575 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-etcd-service-ca\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.683600 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31cdca6f-11b2-4888-9a4c-4b06a94d1863-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2bx5m\" (UID: \"31cdca6f-11b2-4888-9a4c-4b06a94d1863\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.684605 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-dir\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.686889 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-config\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.687904 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-registry-certificates\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688098 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac92922f-89ed-41e7-bf6f-9750efc9cab0-audit-policies\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688201 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22ff872b-afc9-4fa7-812b-f47bb3add27c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688661 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688786 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac92922f-89ed-41e7-bf6f-9750efc9cab0-etcd-client\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688844 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/229488e3-89a8-4eb4-841e-980db3f8cfb3-serving-cert\") pod \"openshift-config-operator-7777fb866f-5zjq4\" (UID: \"229488e3-89a8-4eb4-841e-980db3f8cfb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688874 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688903 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-trusted-ca\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688943 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/330d3fd9-790f-406d-a122-152a1ab07e5c-serving-cert\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688968 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688993 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ac92922f-89ed-41e7-bf6f-9750efc9cab0-encryption-config\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689018 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c40a34-73d2-4a28-b2bd-31e19e6361d2-config-volume\") pod \"collect-profiles-29497845-zn989\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689166 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-service-ca-bundle\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689233 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301a24de-a6b1-45a1-a12d-663325e45fd6-srv-cert\") pod \"catalog-operator-68c6474976-4fhtj\" (UID: \"301a24de-a6b1-45a1-a12d-663325e45fd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689268 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p9ph\" (UniqueName: \"kubernetes.io/projected/6f2d8117-c3e5-498f-8458-e72238d0f0ac-kube-api-access-8p9ph\") pod \"package-server-manager-789f6589d5-l8wxr\" (UID: \"6f2d8117-c3e5-498f-8458-e72238d0f0ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689297 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-bound-sa-token\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689324 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09c58528-2088-4902-ab32-10cd90be0562-serving-cert\") pod \"service-ca-operator-777779d784-g5s5p\" (UID: \"09c58528-2088-4902-ab32-10cd90be0562\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689352 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-audit\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689394 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689419 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a766d0cf-2406-4406-aaec-51a9da3d6b55-node-bootstrap-token\") pod \"machine-config-server-w9cb6\" (UID: \"a766d0cf-2406-4406-aaec-51a9da3d6b55\") " pod="openshift-machine-config-operator/machine-config-server-w9cb6"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689442 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1113d5ad-40c9-412f-92c2-2fb0d6ec2903-config-volume\") pod \"dns-default-l8kn4\" (UID: \"1113d5ad-40c9-412f-92c2-2fb0d6ec2903\") " pod="openshift-dns/dns-default-l8kn4"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131
14:57:01.689466 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rbxj\" (UniqueName: \"kubernetes.io/projected/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-kube-api-access-9rbxj\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689505 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689528 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80834dac-7e21-4dda-8f32-3a19eced5753-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-852vg\" (UID: \"80834dac-7e21-4dda-8f32-3a19eced5753\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689563 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg645\" (UniqueName: \"kubernetes.io/projected/09c58528-2088-4902-ab32-10cd90be0562-kube-api-access-gg645\") pod \"service-ca-operator-777779d784-g5s5p\" (UID: \"09c58528-2088-4902-ab32-10cd90be0562\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689584 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgpsn\" (UniqueName: \"kubernetes.io/projected/d73a5142-56cf-4676-a6f1-a00868938c4d-kube-api-access-mgpsn\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689623 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689647 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcqgf\" (UniqueName: \"kubernetes.io/projected/301a24de-a6b1-45a1-a12d-663325e45fd6-kube-api-access-wcqgf\") pod \"catalog-operator-68c6474976-4fhtj\" (UID: \"301a24de-a6b1-45a1-a12d-663325e45fd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689674 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-trusted-ca\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: 
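The entries above show the kubelet volume manager's reconciler driving each volume through its mount lifecycle: reconciler_common.go:245 logs "VerifyControllerAttachedVolume started" when a desired volume is confirmed attached, reconciler_common.go:218 logs "MountVolume started" when the mount operation is dispatched, and operation_generator.go:637 logs "MountVolume.SetUp succeeded" once it completes. A minimal Go sketch of that desired-state/actual-state loop follows; the types and names are illustrative simplifications, not kubelet's actual API.

    // Minimal sketch of the reconcile pattern reflected in the entries above.
    // All types here are hypothetical stand-ins for kubelet's internal state.
    package main

    import "fmt"

    type volume struct{ name, pod string }

    type reconciler struct {
            desired []volume        // volumes the scheduled pods declare
            mounted map[string]bool // "actual state of world"
    }

    func (r *reconciler) reconcile() {
            for _, v := range r.desired {
                    if r.mounted[v.name] {
                            continue // already in sync, nothing to do
                    }
                    // corresponds to "operationExecutor.MountVolume started ..."
                    fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
                    r.mounted[v.name] = true // stand-in for the async SetUp operation
                    // corresponds to "MountVolume.SetUp succeeded ..."
                    fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
            }
    }

    func main() {
            r := &reconciler{
                    desired: []volume{{"etcd-serving-ca", "apiserver-76f77b778f-tv9s8"}},
                    mounted: map[string]bool{},
            }
            r.reconcile()
    }

The loop is idempotent by design: it is safe to run on every sync period because volumes already in the actual state are skipped, which is why the same volume names recur across these entries at different lifecycle stages without harm.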
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689734 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp4j4\" (UniqueName: \"kubernetes.io/projected/e043d261-8774-411b-be6d-98dbb1f210a2-kube-api-access-hp4j4\") pod \"olm-operator-6b444d44fb-jtt2l\" (UID: \"e043d261-8774-411b-be6d-98dbb1f210a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.690584 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-audit\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.692301 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-serving-cert\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.692509 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1b409a5-8274-478d-98bf-fe2171d90c63-serving-cert\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.693116 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-policies\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.693134 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-config\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.694280 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-trusted-ca\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.694760 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-etcd-serving-ca\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.695396 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-config\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.696964 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-registry-tls\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.698253 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-config\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.699075 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-client-ca\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.699330 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.699609 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/329bb364-3958-490e-b065-d2ce7ee1567d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.699639 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac92922f-89ed-41e7-bf6f-9750efc9cab0-audit-dir\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.699928 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8edb2bab-1e72-4b68-afed-2de0572a1071-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-c2zm8\" (UID: \"8edb2bab-1e72-4b68-afed-2de0572a1071\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.700176 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22ff872b-afc9-4fa7-812b-f47bb3add27c-proxy-tls\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.700560 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-etcd-client\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.701198 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-metrics-certs\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.701407 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.701589 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.701810 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/330d3fd9-790f-406d-a122-152a1ab07e5c-encryption-config\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.701847 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/330d3fd9-790f-406d-a122-152a1ab07e5c-node-pullsecrets\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.701971 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8edb2bab-1e72-4b68-afed-2de0572a1071-config\") pod \"kube-apiserver-operator-766d6c64bb-c2zm8\" (UID: \"8edb2bab-1e72-4b68-afed-2de0572a1071\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.702638 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.703719 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31cdca6f-11b2-4888-9a4c-4b06a94d1863-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2bx5m\" (UID: \"31cdca6f-11b2-4888-9a4c-4b06a94d1863\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.704369 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.705643 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f56211-548f-4d20-9c0a-70108a8f557b-config\") pod \"kube-controller-manager-operator-78b949d7b-8gxz5\" (UID: \"f6f56211-548f-4d20-9c0a-70108a8f557b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.706105 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/330d3fd9-790f-406d-a122-152a1ab07e5c-audit-dir\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.706775 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:01 crc kubenswrapper[4763]: E0131 14:57:01.707151 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.20713732 +0000 UTC m=+141.961875613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
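The E-line above is the first failure in this fragment: MountVolume.MountDevice for the image registry's PVC cannot proceed because the CSI driver kubevirt.io.hostpath-provisioner has not yet registered with the kubelet, so nestedpendingoperations parks the operation and refuses further attempts for 500ms (durationBeforeRetry). On repeated failures this delay grows exponentially. The Go sketch below mirrors that pattern with a 500ms initial delay doubling toward a cap of roughly two minutes; treat the exact constants as illustrative rather than kubelet's authoritative values.

    // Sketch of the retry backoff behind "No retries permitted until ...
    // (durationBeforeRetry 500ms)". Constants are assumptions chosen to
    // resemble kubelet's volume-operation backoff, not quoted from it.
    package main

    import (
            "fmt"
            "time"
    )

    const (
            initialDelay = 500 * time.Millisecond
            maxDelay     = 2*time.Minute + 2*time.Second
    )

    func main() {
            delay := initialDelay
            for attempt := 1; attempt <= 6; attempt++ {
                    fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, delay)
                    delay *= 2 // exponential growth between attempts
                    if delay > maxDelay {
                            delay = maxDelay // never back off longer than the cap
                    }
            }
    }

The backoff keeps the kubelet from hot-looping on a mount that cannot yet succeed, while still converging quickly once the driver comes up; that is why the very next sync here is scheduled only 500ms out.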
\"kubernetes.io/configmap/ac92922f-89ed-41e7-bf6f-9750efc9cab0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.712404 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/229488e3-89a8-4eb4-841e-980db3f8cfb3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5zjq4\" (UID: \"229488e3-89a8-4eb4-841e-980db3f8cfb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.713215 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-trusted-ca\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.713598 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-image-import-ca\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.714816 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/330d3fd9-790f-406d-a122-152a1ab07e5c-serving-cert\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.716498 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.718728 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-default-certificate\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.718944 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.719793 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac92922f-89ed-41e7-bf6f-9750efc9cab0-etcd-client\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.719869 
4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f56211-548f-4d20-9c0a-70108a8f557b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8gxz5\" (UID: \"f6f56211-548f-4d20-9c0a-70108a8f557b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.719977 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ac92922f-89ed-41e7-bf6f-9750efc9cab0-encryption-config\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.720250 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/330d3fd9-790f-406d-a122-152a1ab07e5c-etcd-client\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.724566 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-serving-cert\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.737624 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/229488e3-89a8-4eb4-841e-980db3f8cfb3-serving-cert\") pod \"openshift-config-operator-7777fb866f-5zjq4\" (UID: \"229488e3-89a8-4eb4-841e-980db3f8cfb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.740824 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2df2f\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-kube-api-access-2df2f\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.749827 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-stats-auth\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.750113 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkmj5\" (UniqueName: \"kubernetes.io/projected/22ff872b-afc9-4fa7-812b-f47bb3add27c-kube-api-access-bkmj5\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.779507 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj96k\" (UniqueName: \"kubernetes.io/projected/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-kube-api-access-mj96k\") pod \"console-operator-58897d9998-mjbd9\" (UID: 
\"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790423 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790592 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gq7w\" (UniqueName: \"kubernetes.io/projected/20c40a34-73d2-4a28-b2bd-31e19e6361d2-kube-api-access-4gq7w\") pod \"collect-profiles-29497845-zn989\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790628 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1113d5ad-40c9-412f-92c2-2fb0d6ec2903-metrics-tls\") pod \"dns-default-l8kn4\" (UID: \"1113d5ad-40c9-412f-92c2-2fb0d6ec2903\") " pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790651 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfdsx\" (UniqueName: \"kubernetes.io/projected/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-kube-api-access-sfdsx\") pod \"marketplace-operator-79b997595-flcgf\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790687 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/84d11c6f-169b-4e21-87ec-8bb8930a1831-tmpfs\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790715 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e6758143-5085-416e-9bdc-856a520c71de-signing-cabundle\") pod \"service-ca-9c57cc56f-gxcjc\" (UID: \"e6758143-5085-416e-9bdc-856a520c71de\") " pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790730 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-registration-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790761 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790784 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1dc5ca38-64fe-41f8-a989-0b035bf29414-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j7kx4\" (UID: \"1dc5ca38-64fe-41f8-a989-0b035bf29414\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790805 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-mountpoint-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790826 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790848 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c40a34-73d2-4a28-b2bd-31e19e6361d2-config-volume\") pod \"collect-profiles-29497845-zn989\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790877 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301a24de-a6b1-45a1-a12d-663325e45fd6-srv-cert\") pod \"catalog-operator-68c6474976-4fhtj\" (UID: \"301a24de-a6b1-45a1-a12d-663325e45fd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790892 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p9ph\" (UniqueName: \"kubernetes.io/projected/6f2d8117-c3e5-498f-8458-e72238d0f0ac-kube-api-access-8p9ph\") pod \"package-server-manager-789f6589d5-l8wxr\" (UID: \"6f2d8117-c3e5-498f-8458-e72238d0f0ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790908 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09c58528-2088-4902-ab32-10cd90be0562-serving-cert\") pod \"service-ca-operator-777779d784-g5s5p\" (UID: \"09c58528-2088-4902-ab32-10cd90be0562\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790927 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a766d0cf-2406-4406-aaec-51a9da3d6b55-node-bootstrap-token\") pod \"machine-config-server-w9cb6\" (UID: \"a766d0cf-2406-4406-aaec-51a9da3d6b55\") " pod="openshift-machine-config-operator/machine-config-server-w9cb6" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790941 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1113d5ad-40c9-412f-92c2-2fb0d6ec2903-config-volume\") pod \"dns-default-l8kn4\" (UID: 
\"1113d5ad-40c9-412f-92c2-2fb0d6ec2903\") " pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790957 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rbxj\" (UniqueName: \"kubernetes.io/projected/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-kube-api-access-9rbxj\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790972 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80834dac-7e21-4dda-8f32-3a19eced5753-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-852vg\" (UID: \"80834dac-7e21-4dda-8f32-3a19eced5753\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790987 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg645\" (UniqueName: \"kubernetes.io/projected/09c58528-2088-4902-ab32-10cd90be0562-kube-api-access-gg645\") pod \"service-ca-operator-777779d784-g5s5p\" (UID: \"09c58528-2088-4902-ab32-10cd90be0562\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791002 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgpsn\" (UniqueName: \"kubernetes.io/projected/d73a5142-56cf-4676-a6f1-a00868938c4d-kube-api-access-mgpsn\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791017 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcqgf\" (UniqueName: \"kubernetes.io/projected/301a24de-a6b1-45a1-a12d-663325e45fd6-kube-api-access-wcqgf\") pod \"catalog-operator-68c6474976-4fhtj\" (UID: \"301a24de-a6b1-45a1-a12d-663325e45fd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791036 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp4j4\" (UniqueName: \"kubernetes.io/projected/e043d261-8774-411b-be6d-98dbb1f210a2-kube-api-access-hp4j4\") pod \"olm-operator-6b444d44fb-jtt2l\" (UID: \"e043d261-8774-411b-be6d-98dbb1f210a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791054 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krzb2\" (UniqueName: \"kubernetes.io/projected/1113d5ad-40c9-412f-92c2-2fb0d6ec2903-kube-api-access-krzb2\") pod \"dns-default-l8kn4\" (UID: \"1113d5ad-40c9-412f-92c2-2fb0d6ec2903\") " pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791068 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e043d261-8774-411b-be6d-98dbb1f210a2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jtt2l\" (UID: \"e043d261-8774-411b-be6d-98dbb1f210a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 
14:57:01.791085 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbsgk\" (UniqueName: \"kubernetes.io/projected/92ef1804-52cd-46a1-86e1-baf561981f8b-kube-api-access-zbsgk\") pod \"ingress-canary-5q274\" (UID: \"92ef1804-52cd-46a1-86e1-baf561981f8b\") " pod="openshift-ingress-canary/ingress-canary-5q274" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791101 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c40a34-73d2-4a28-b2bd-31e19e6361d2-secret-volume\") pod \"collect-profiles-29497845-zn989\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791117 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301a24de-a6b1-45a1-a12d-663325e45fd6-profile-collector-cert\") pod \"catalog-operator-68c6474976-4fhtj\" (UID: \"301a24de-a6b1-45a1-a12d-663325e45fd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791133 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f313ecec-c631-4270-a297-51e482e3e306-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z8wss\" (UID: \"f313ecec-c631-4270-a297-51e482e3e306\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791149 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1dc5ca38-64fe-41f8-a989-0b035bf29414-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j7kx4\" (UID: \"1dc5ca38-64fe-41f8-a989-0b035bf29414\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791166 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a7826828-7856-44a4-be9f-f1a939950c3e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dncp9\" (UID: \"a7826828-7856-44a4-be9f-f1a939950c3e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791185 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-flcgf\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791200 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/92ef1804-52cd-46a1-86e1-baf561981f8b-cert\") pod \"ingress-canary-5q274\" (UID: \"92ef1804-52cd-46a1-86e1-baf561981f8b\") " pod="openshift-ingress-canary/ingress-canary-5q274" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791217 4763 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vvtqq\" (UniqueName: \"kubernetes.io/projected/80834dac-7e21-4dda-8f32-3a19eced5753-kube-api-access-vvtqq\") pod \"multus-admission-controller-857f4d67dd-852vg\" (UID: \"80834dac-7e21-4dda-8f32-3a19eced5753\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791235 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-flcgf\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791259 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f2d8117-c3e5-498f-8458-e72238d0f0ac-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-l8wxr\" (UID: \"6f2d8117-c3e5-498f-8458-e72238d0f0ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791274 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a766d0cf-2406-4406-aaec-51a9da3d6b55-certs\") pod \"machine-config-server-w9cb6\" (UID: \"a766d0cf-2406-4406-aaec-51a9da3d6b55\") " pod="openshift-machine-config-operator/machine-config-server-w9cb6" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791297 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e043d261-8774-411b-be6d-98dbb1f210a2-srv-cert\") pod \"olm-operator-6b444d44fb-jtt2l\" (UID: \"e043d261-8774-411b-be6d-98dbb1f210a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791314 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84d11c6f-169b-4e21-87ec-8bb8930a1831-apiservice-cert\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791331 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e6758143-5085-416e-9bdc-856a520c71de-signing-key\") pod \"service-ca-9c57cc56f-gxcjc\" (UID: \"e6758143-5085-416e-9bdc-856a520c71de\") " pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791348 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swj8t\" (UniqueName: \"kubernetes.io/projected/e6758143-5085-416e-9bdc-856a520c71de-kube-api-access-swj8t\") pod \"service-ca-9c57cc56f-gxcjc\" (UID: \"e6758143-5085-416e-9bdc-856a520c71de\") " pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791379 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84d11c6f-169b-4e21-87ec-8bb8930a1831-webhook-cert\") pod \"packageserver-d55dfcdfc-f7fgc\" 
(UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791395 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09c58528-2088-4902-ab32-10cd90be0562-config\") pod \"service-ca-operator-777779d784-g5s5p\" (UID: \"09c58528-2088-4902-ab32-10cd90be0562\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:57:01 crc kubenswrapper[4763]: E0131 14:57:01.792557 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.292521295 +0000 UTC m=+142.047259588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791442 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dc5ca38-64fe-41f8-a989-0b035bf29414-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j7kx4\" (UID: \"1dc5ca38-64fe-41f8-a989-0b035bf29414\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.793519 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjksc\" (UniqueName: \"kubernetes.io/projected/f313ecec-c631-4270-a297-51e482e3e306-kube-api-access-zjksc\") pod \"kube-storage-version-migrator-operator-b67b599dd-z8wss\" (UID: \"f313ecec-c631-4270-a297-51e482e3e306\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.793567 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2rjv\" (UniqueName: \"kubernetes.io/projected/a766d0cf-2406-4406-aaec-51a9da3d6b55-kube-api-access-x2rjv\") pod \"machine-config-server-w9cb6\" (UID: \"a766d0cf-2406-4406-aaec-51a9da3d6b55\") " pod="openshift-machine-config-operator/machine-config-server-w9cb6" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.793622 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-665h2\" (UniqueName: \"kubernetes.io/projected/84d11c6f-169b-4e21-87ec-8bb8930a1831-kube-api-access-665h2\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.793652 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6jtr\" (UniqueName: \"kubernetes.io/projected/a7826828-7856-44a4-be9f-f1a939950c3e-kube-api-access-p6jtr\") pod \"control-plane-machine-set-operator-78cbb6b69f-dncp9\" (UID: 
\"a7826828-7856-44a4-be9f-f1a939950c3e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.793686 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-plugins-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.793762 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f313ecec-c631-4270-a297-51e482e3e306-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z8wss\" (UID: \"f313ecec-c631-4270-a297-51e482e3e306\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.793790 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-socket-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.793825 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-csi-data-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.793850 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.794346 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-plugins-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.795095 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c40a34-73d2-4a28-b2bd-31e19e6361d2-config-volume\") pod \"collect-profiles-29497845-zn989\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.795498 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-flcgf\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.795987 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/84d11c6f-169b-4e21-87ec-8bb8930a1831-tmpfs\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.796748 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c40a34-73d2-4a28-b2bd-31e19e6361d2-secret-volume\") pod \"collect-profiles-29497845-zn989\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.796875 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f313ecec-c631-4270-a297-51e482e3e306-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z8wss\" (UID: \"f313ecec-c631-4270-a297-51e482e3e306\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.797309 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09c58528-2088-4902-ab32-10cd90be0562-config\") pod \"service-ca-operator-777779d784-g5s5p\" (UID: \"09c58528-2088-4902-ab32-10cd90be0562\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.797621 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-socket-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.797724 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-csi-data-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.797874 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e6758143-5085-416e-9bdc-856a520c71de-signing-cabundle\") pod \"service-ca-9c57cc56f-gxcjc\" (UID: \"e6758143-5085-416e-9bdc-856a520c71de\") " pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.797963 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-registration-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.798264 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1113d5ad-40c9-412f-92c2-2fb0d6ec2903-config-volume\") pod \"dns-default-l8kn4\" (UID: \"1113d5ad-40c9-412f-92c2-2fb0d6ec2903\") " pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.799096 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-mountpoint-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.799381 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.799978 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc5ca38-64fe-41f8-a989-0b035bf29414-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j7kx4\" (UID: \"1dc5ca38-64fe-41f8-a989-0b035bf29414\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.800628 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84d11c6f-169b-4e21-87ec-8bb8930a1831-apiservice-cert\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.801413 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e043d261-8774-411b-be6d-98dbb1f210a2-srv-cert\") pod \"olm-operator-6b444d44fb-jtt2l\" (UID: \"e043d261-8774-411b-be6d-98dbb1f210a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.803136 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-bound-sa-token\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.803179 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e043d261-8774-411b-be6d-98dbb1f210a2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jtt2l\" (UID: \"e043d261-8774-411b-be6d-98dbb1f210a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.803308 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a766d0cf-2406-4406-aaec-51a9da3d6b55-node-bootstrap-token\") pod \"machine-config-server-w9cb6\" (UID: \"a766d0cf-2406-4406-aaec-51a9da3d6b55\") " pod="openshift-machine-config-operator/machine-config-server-w9cb6" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.803350 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301a24de-a6b1-45a1-a12d-663325e45fd6-profile-collector-cert\") pod \"catalog-operator-68c6474976-4fhtj\" (UID: \"301a24de-a6b1-45a1-a12d-663325e45fd6\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.803630 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dc5ca38-64fe-41f8-a989-0b035bf29414-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j7kx4\" (UID: \"1dc5ca38-64fe-41f8-a989-0b035bf29414\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.803684 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e6758143-5085-416e-9bdc-856a520c71de-signing-key\") pod \"service-ca-9c57cc56f-gxcjc\" (UID: \"e6758143-5085-416e-9bdc-856a520c71de\") " pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.803837 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-flcgf\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.803855 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1113d5ad-40c9-412f-92c2-2fb0d6ec2903-metrics-tls\") pod \"dns-default-l8kn4\" (UID: \"1113d5ad-40c9-412f-92c2-2fb0d6ec2903\") " pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.804477 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f313ecec-c631-4270-a297-51e482e3e306-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z8wss\" (UID: \"f313ecec-c631-4270-a297-51e482e3e306\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.804834 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09c58528-2088-4902-ab32-10cd90be0562-serving-cert\") pod \"service-ca-operator-777779d784-g5s5p\" (UID: \"09c58528-2088-4902-ab32-10cd90be0562\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.804878 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.804933 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/92ef1804-52cd-46a1-86e1-baf561981f8b-cert\") pod \"ingress-canary-5q274\" (UID: \"92ef1804-52cd-46a1-86e1-baf561981f8b\") " pod="openshift-ingress-canary/ingress-canary-5q274" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.805294 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/301a24de-a6b1-45a1-a12d-663325e45fd6-srv-cert\") pod \"catalog-operator-68c6474976-4fhtj\" (UID: \"301a24de-a6b1-45a1-a12d-663325e45fd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.806338 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a766d0cf-2406-4406-aaec-51a9da3d6b55-certs\") pod \"machine-config-server-w9cb6\" (UID: \"a766d0cf-2406-4406-aaec-51a9da3d6b55\") " pod="openshift-machine-config-operator/machine-config-server-w9cb6" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.806771 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f2d8117-c3e5-498f-8458-e72238d0f0ac-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-l8wxr\" (UID: \"6f2d8117-c3e5-498f-8458-e72238d0f0ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.806938 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80834dac-7e21-4dda-8f32-3a19eced5753-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-852vg\" (UID: \"80834dac-7e21-4dda-8f32-3a19eced5753\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.807003 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84d11c6f-169b-4e21-87ec-8bb8930a1831-webhook-cert\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.811855 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9"] Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.812090 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a7826828-7856-44a4-be9f-f1a939950c3e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dncp9\" (UID: \"a7826828-7856-44a4-be9f-f1a939950c3e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.820551 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" event={"ID":"96ce635a-c905-4317-9f6d-64e1437d95c2","Type":"ContainerStarted","Data":"63cb66984f9c0a300f5da8745509ae18241a5d125ff6b08817258f5e539b6bf5"} Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.820608 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" event={"ID":"96ce635a-c905-4317-9f6d-64e1437d95c2","Type":"ContainerStarted","Data":"6c08a39d0c6c3c6b1f6ca8054506acc7fdd081a33e2bb00f17fe269e7c284842"} Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.820757 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7cfv\" (UniqueName: \"kubernetes.io/projected/e1b409a5-8274-478d-98bf-fe2171d90c63-kube-api-access-v7cfv\") pod 
\"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.822732 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9lvgt"] Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.824263 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" event={"ID":"b67dedfb-accc-467d-a3bb-508eab4f88c8","Type":"ContainerStarted","Data":"b8fb7a2f200b2701b2184c4c721af8933812270f9a51bf230ed65be2f7cd5bd1"} Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.824304 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" event={"ID":"b67dedfb-accc-467d-a3bb-508eab4f88c8","Type":"ContainerStarted","Data":"5825be7f5b0a2372a0714ff20d5e467974a43da635470237d607739086eb1094"} Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.824925 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.827024 4763 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bpxtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.827731 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" podUID="b67dedfb-accc-467d-a3bb-508eab4f88c8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.828064 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.835650 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctc8x\" (UniqueName: \"kubernetes.io/projected/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-kube-api-access-ctc8x\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.838612 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" event={"ID":"954567bc-27c1-40c6-8fa3-8f653f90c199","Type":"ContainerStarted","Data":"0a0295e9478c783cc5c7eae8b9d1e576728bdcf28c9b61ba5e36bd581149c3aa"} Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.838645 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" event={"ID":"954567bc-27c1-40c6-8fa3-8f653f90c199","Type":"ContainerStarted","Data":"da9911c212d0087ba033bb74fab71ce73c49d040572cf055075f1e14bb37f205"} Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.841171 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" event={"ID":"5d9ac26c-eb66-4772-b7ee-a6b646092c4b","Type":"ContainerStarted","Data":"48eccc93a4902e5dbeedb0e4b6a546c340cb4fa2ad77c0a08618d843e1fa5198"} Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.859383 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqmrv\" (UniqueName: \"kubernetes.io/projected/31cdca6f-11b2-4888-9a4c-4b06a94d1863-kube-api-access-xqmrv\") pod \"openshift-apiserver-operator-796bbdcf4f-2bx5m\" (UID: \"31cdca6f-11b2-4888-9a4c-4b06a94d1863\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.880241 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llk9p\" (UniqueName: \"kubernetes.io/projected/275ea46d-7a78-4457-a5ba-7b3000170d0e-kube-api-access-llk9p\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.882394 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.893791 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgrjn\" (UniqueName: \"kubernetes.io/projected/330d3fd9-790f-406d-a122-152a1ab07e5c-kube-api-access-mgrjn\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.895102 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: E0131 14:57:01.896029 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.396012456 +0000 UTC m=+142.150750749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:01 crc kubenswrapper[4763]: W0131 14:57:01.916189 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47ac991e_3a26_4da1_9cf0_6f0944a3bf7b.slice/crio-7deed9c9f509572e5a31c37b70b7fba519cc4fd85f5536f01a7eb0069d39f894 WatchSource:0}: Error finding container 7deed9c9f509572e5a31c37b70b7fba519cc4fd85f5536f01a7eb0069d39f894: Status 404 returned error can't find the container with id 7deed9c9f509572e5a31c37b70b7fba519cc4fd85f5536f01a7eb0069d39f894 Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.918414 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p6kd\" (UniqueName: \"kubernetes.io/projected/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-kube-api-access-8p6kd\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.925996 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.931981 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.944327 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6f56211-548f-4d20-9c0a-70108a8f557b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8gxz5\" (UID: \"f6f56211-548f-4d20-9c0a-70108a8f557b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.960889 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z"] Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.961027 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8edb2bab-1e72-4b68-afed-2de0572a1071-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-c2zm8\" (UID: \"8edb2bab-1e72-4b68-afed-2de0572a1071\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.976370 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjb6g\" (UniqueName: \"kubernetes.io/projected/229488e3-89a8-4eb4-841e-980db3f8cfb3-kube-api-access-cjb6g\") pod \"openshift-config-operator-7777fb866f-5zjq4\" (UID: \"229488e3-89a8-4eb4-841e-980db3f8cfb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.995630 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44m57\" (UniqueName: \"kubernetes.io/projected/ac92922f-89ed-41e7-bf6f-9750efc9cab0-kube-api-access-44m57\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.996816 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.997139 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.000624 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.500599652 +0000 UTC m=+142.255337965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.005875 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.044315 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcqgf\" (UniqueName: \"kubernetes.io/projected/301a24de-a6b1-45a1-a12d-663325e45fd6-kube-api-access-wcqgf\") pod \"catalog-operator-68c6474976-4fhtj\" (UID: \"301a24de-a6b1-45a1-a12d-663325e45fd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.048431 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.055607 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jj6qz"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.062401 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp4j4\" (UniqueName: \"kubernetes.io/projected/e043d261-8774-411b-be6d-98dbb1f210a2-kube-api-access-hp4j4\") pod \"olm-operator-6b444d44fb-jtt2l\" (UID: \"e043d261-8774-411b-be6d-98dbb1f210a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.064830 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mjbd9"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.078572 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krzb2\" (UniqueName: \"kubernetes.io/projected/1113d5ad-40c9-412f-92c2-2fb0d6ec2903-kube-api-access-krzb2\") pod \"dns-default-l8kn4\" (UID: \"1113d5ad-40c9-412f-92c2-2fb0d6ec2903\") " pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.081543 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.099311 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.099818 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.599789959 +0000 UTC m=+142.354528252 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.104385 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.105265 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgpsn\" (UniqueName: \"kubernetes.io/projected/d73a5142-56cf-4676-a6f1-a00868938c4d-kube-api-access-mgpsn\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.126922 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.130555 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gq7w\" (UniqueName: \"kubernetes.io/projected/20c40a34-73d2-4a28-b2bd-31e19e6361d2-kube-api-access-4gq7w\") pod \"collect-profiles-29497845-zn989\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.135173 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.136152 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.137591 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvtqq\" (UniqueName: \"kubernetes.io/projected/80834dac-7e21-4dda-8f32-3a19eced5753-kube-api-access-vvtqq\") pod \"multus-admission-controller-857f4d67dd-852vg\" (UID: \"80834dac-7e21-4dda-8f32-3a19eced5753\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.149399 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.149446 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bh727"] Jan 31 14:57:02 crc kubenswrapper[4763]: W0131 14:57:02.165071 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c48bb3f_235a_4dcd_ba1a_62f85f8946ac.slice/crio-80c11ffb6b6b7f3ace175bbbc27fa7d0fc3da7724863e9fa5d6a7e1ff907022d WatchSource:0}: Error finding container 80c11ffb6b6b7f3ace175bbbc27fa7d0fc3da7724863e9fa5d6a7e1ff907022d: Status 404 returned error can't find the container with id 80c11ffb6b6b7f3ace175bbbc27fa7d0fc3da7724863e9fa5d6a7e1ff907022d Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.178076 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.180612 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbsgk\" (UniqueName: \"kubernetes.io/projected/92ef1804-52cd-46a1-86e1-baf561981f8b-kube-api-access-zbsgk\") pod \"ingress-canary-5q274\" (UID: \"92ef1804-52cd-46a1-86e1-baf561981f8b\") " pod="openshift-ingress-canary/ingress-canary-5q274" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.203284 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.203546 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.703489236 +0000 UTC m=+142.458227529 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.204055 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.204219 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjksc\" (UniqueName: \"kubernetes.io/projected/f313ecec-c631-4270-a297-51e482e3e306-kube-api-access-zjksc\") pod \"kube-storage-version-migrator-operator-b67b599dd-z8wss\" (UID: \"f313ecec-c631-4270-a297-51e482e3e306\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.205268 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.705256441 +0000 UTC m=+142.459994824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.214505 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.221136 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk"] Jan 31 14:57:02 crc kubenswrapper[4763]: W0131 14:57:02.228667 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb78095b_d026_498f_9616_d8365161f809.slice/crio-15d3a12853d42ed33dc8bb430786b504cf974661a23c8706c5f9a3c19fe8f152 WatchSource:0}: Error finding container 15d3a12853d42ed33dc8bb430786b504cf974661a23c8706c5f9a3c19fe8f152: Status 404 returned error can't find the container with id 15d3a12853d42ed33dc8bb430786b504cf974661a23c8706c5f9a3c19fe8f152 Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.232319 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfdsx\" (UniqueName: \"kubernetes.io/projected/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-kube-api-access-sfdsx\") pod \"marketplace-operator-79b997595-flcgf\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:02 crc kubenswrapper[4763]: W0131 14:57:02.232759 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65c4b2d3_8915_480e_abf5_3b3e0184f778.slice/crio-d8872e708ab881cc8856cf37a1b15fe5412e07d9e31d6719c2e889aa0b19dd4a WatchSource:0}: Error finding container d8872e708ab881cc8856cf37a1b15fe5412e07d9e31d6719c2e889aa0b19dd4a: Status 404 returned error can't find the container with id d8872e708ab881cc8856cf37a1b15fe5412e07d9e31d6719c2e889aa0b19dd4a Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.236304 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swj8t\" (UniqueName: \"kubernetes.io/projected/e6758143-5085-416e-9bdc-856a520c71de-kube-api-access-swj8t\") pod \"service-ca-9c57cc56f-gxcjc\" (UID: \"e6758143-5085-416e-9bdc-856a520c71de\") " pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.242630 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.271950 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.272259 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nzj54"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.272866 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2rjv\" (UniqueName: \"kubernetes.io/projected/a766d0cf-2406-4406-aaec-51a9da3d6b55-kube-api-access-x2rjv\") pod \"machine-config-server-w9cb6\" (UID: \"a766d0cf-2406-4406-aaec-51a9da3d6b55\") " pod="openshift-machine-config-operator/machine-config-server-w9cb6" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.275891 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.284536 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-665h2\" (UniqueName: \"kubernetes.io/projected/84d11c6f-169b-4e21-87ec-8bb8930a1831-kube-api-access-665h2\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:02 crc kubenswrapper[4763]: W0131 14:57:02.288202 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22ff872b_afc9_4fa7_812b_f47bb3add27c.slice/crio-2f94de76d368d97664c9676ded0277b97294b97bea27989e794d51fc6c81543e WatchSource:0}: Error finding container 2f94de76d368d97664c9676ded0277b97294b97bea27989e794d51fc6c81543e: Status 404 returned error can't find the container with id 2f94de76d368d97664c9676ded0277b97294b97bea27989e794d51fc6c81543e Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.295322 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.296468 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6jtr\" (UniqueName: \"kubernetes.io/projected/a7826828-7856-44a4-be9f-f1a939950c3e-kube-api-access-p6jtr\") pod \"control-plane-machine-set-operator-78cbb6b69f-dncp9\" (UID: \"a7826828-7856-44a4-be9f-f1a939950c3e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.302079 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.306329 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.306630 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.306957 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.806941097 +0000 UTC m=+142.561679390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.312275 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.314318 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1dc5ca38-64fe-41f8-a989-0b035bf29414-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j7kx4\" (UID: \"1dc5ca38-64fe-41f8-a989-0b035bf29414\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.319457 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.325858 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.331717 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.337123 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rbxj\" (UniqueName: \"kubernetes.io/projected/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-kube-api-access-9rbxj\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.337676 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.346292 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-w9cb6" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.363533 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.364181 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg645\" (UniqueName: \"kubernetes.io/projected/09c58528-2088-4902-ab32-10cd90be0562-kube-api-access-gg645\") pod \"service-ca-operator-777779d784-g5s5p\" (UID: \"09c58528-2088-4902-ab32-10cd90be0562\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.369882 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.375397 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5q274" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.377442 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p9ph\" (UniqueName: \"kubernetes.io/projected/6f2d8117-c3e5-498f-8458-e72238d0f0ac-kube-api-access-8p9ph\") pod \"package-server-manager-789f6589d5-l8wxr\" (UID: \"6f2d8117-c3e5-498f-8458-e72238d0f0ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.408823 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.409127 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.909114257 +0000 UTC m=+142.663852550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.510313 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.511215 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:03.011199784 +0000 UTC m=+142.765938077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.544872 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.554967 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.582418 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.586870 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tv9s8"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.588275 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.588441 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.603628 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.613670 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.614009 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:03.113994203 +0000 UTC m=+142.868732496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: W0131 14:57:02.634849 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod229488e3_89a8_4eb4_841e_980db3f8cfb3.slice/crio-ad79a062a0d7376cfab59427c466c0d955dfdc62d38bc98da632da1ceeada219 WatchSource:0}: Error finding container ad79a062a0d7376cfab59427c466c0d955dfdc62d38bc98da632da1ceeada219: Status 404 returned error can't find the container with id ad79a062a0d7376cfab59427c466c0d955dfdc62d38bc98da632da1ceeada219 Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.675060 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.706295 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pcvn"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.712070 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.713578 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.714263 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.715368 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:03.215346348 +0000 UTC m=+142.970084641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.817718 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.818265 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:03.31822746 +0000 UTC m=+143.072965753 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: W0131 14:57:02.823606 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod275ea46d_7a78_4457_a5ba_7b3000170d0e.slice/crio-4c129b0efc8ba7e81700bd2c14706e7d50e9bb2c2ff1bf01d5aadf4cd55835b7 WatchSource:0}: Error finding container 4c129b0efc8ba7e81700bd2c14706e7d50e9bb2c2ff1bf01d5aadf4cd55835b7: Status 404 returned error can't find the container with id 4c129b0efc8ba7e81700bd2c14706e7d50e9bb2c2ff1bf01d5aadf4cd55835b7 Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.905953 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" event={"ID":"330d3fd9-790f-406d-a122-152a1ab07e5c","Type":"ContainerStarted","Data":"4ae66d1e24ee960fa9743e61e520b26c4e663054f89f7c05c1946d2e50245607"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.914150 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" event={"ID":"156e6a74-f3a0-4ae0-8233-36da8946b7d6","Type":"ContainerStarted","Data":"cec37e3b98d59d901f64960a1b70127637b77fe0e693177f08873e4ace80a3a3"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.914238 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" event={"ID":"156e6a74-f3a0-4ae0-8233-36da8946b7d6","Type":"ContainerStarted","Data":"accb3b8144675dcb50127d1e274307b54a06be0debf1f047758f4b388282996b"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.914249 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" 
event={"ID":"156e6a74-f3a0-4ae0-8233-36da8946b7d6","Type":"ContainerStarted","Data":"4c657c0d5a8e347fa5049810271a5a9b811e2b7f7f8f68e2e0afbf29b28fa299"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.918407 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" event={"ID":"e1b409a5-8274-478d-98bf-fe2171d90c63","Type":"ContainerStarted","Data":"7b560d0bfea717d413c6f7f997b55322350cc5d7f6770006679df4dc57bee56a"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.920873 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.921223 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:03.421210615 +0000 UTC m=+143.175948908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.923680 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-87f9c" event={"ID":"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b","Type":"ContainerStarted","Data":"21097b2e02072a318fcd7dfca12110d4bcc198574a952da3787017a8def86308"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.923721 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-87f9c" event={"ID":"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b","Type":"ContainerStarted","Data":"7deed9c9f509572e5a31c37b70b7fba519cc4fd85f5536f01a7eb0069d39f894"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.947751 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" event={"ID":"275ea46d-7a78-4457-a5ba-7b3000170d0e","Type":"ContainerStarted","Data":"4c129b0efc8ba7e81700bd2c14706e7d50e9bb2c2ff1bf01d5aadf4cd55835b7"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.952882 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.959127 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bh727" event={"ID":"db0aea6c-f6f8-4548-905b-22d810b334d4","Type":"ContainerStarted","Data":"342946d2a4a384a6f0a19643ab5d90fa97f53de305c61a5302eccfa96b181f40"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.962849 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" 
event={"ID":"22ff872b-afc9-4fa7-812b-f47bb3add27c","Type":"ContainerStarted","Data":"2f94de76d368d97664c9676ded0277b97294b97bea27989e794d51fc6c81543e"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.978134 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" event={"ID":"96ce635a-c905-4317-9f6d-64e1437d95c2","Type":"ContainerStarted","Data":"9814c6fa74dbe73ca7edeb7eaf6e6de772b5dac45769694996c928a7aab5d1dc"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.003008 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" event={"ID":"5d9ac26c-eb66-4772-b7ee-a6b646092c4b","Type":"ContainerStarted","Data":"2302157f6051cb6945834f1df0aa8520e57ccf4db5e78c1ec9c7e7913ec3cc7d"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.003047 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" event={"ID":"5d9ac26c-eb66-4772-b7ee-a6b646092c4b","Type":"ContainerStarted","Data":"f8d05686c2858bc92bd2e41e3aae29ac7958c66e517e0a32923e07d763aae11e"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.011072 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" event={"ID":"f6f56211-548f-4d20-9c0a-70108a8f557b","Type":"ContainerStarted","Data":"fbb2377f28aba3b4ba6092b125b0254a2a05b7fc8d467952776c243a93737a08"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.022171 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.022527 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:03.522513668 +0000 UTC m=+143.277251961 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.024521 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg" event={"ID":"bb78095b-d026-498f-9616-d8365161f809","Type":"ContainerStarted","Data":"53b64499123d2a0acca61ae9705595efe58d6b21354977c8ac2416a44c6b1182"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.024572 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg" event={"ID":"bb78095b-d026-498f-9616-d8365161f809","Type":"ContainerStarted","Data":"15d3a12853d42ed33dc8bb430786b504cf974661a23c8706c5f9a3c19fe8f152"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.036295 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" event={"ID":"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac","Type":"ContainerStarted","Data":"aaa8ba458d3b60d29340529d0fa73caf87da8887c7c23d807766097f50118d7f"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.036340 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" event={"ID":"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac","Type":"ContainerStarted","Data":"80c11ffb6b6b7f3ace175bbbc27fa7d0fc3da7724863e9fa5d6a7e1ff907022d"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.102531 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-852vg"] Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.123772 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" event={"ID":"7540b5d1-cac8-4c3d-88f1-cd961bf8bd47","Type":"ContainerStarted","Data":"9d35d644de1a97069cbe16d43105c4ac36ecd501fe816d2356dce1f49d07b428"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.123814 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" event={"ID":"7540b5d1-cac8-4c3d-88f1-cd961bf8bd47","Type":"ContainerStarted","Data":"406b873df247435cb3b0a54b13deae5bf7bfd31bd8b90d2dec68d3da3b8a4441"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.124219 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.124394 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flcgf"] Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.125027 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 14:57:03.625011658 +0000 UTC m=+143.379749951 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.132068 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" event={"ID":"229488e3-89a8-4eb4-841e-980db3f8cfb3","Type":"ContainerStarted","Data":"ad79a062a0d7376cfab59427c466c0d955dfdc62d38bc98da632da1ceeada219"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.133821 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9lvgt" event={"ID":"0cddc243-3a83-4398-87a9-7a111581bec5","Type":"ContainerStarted","Data":"5c62f7ee183dfe1e029224782212786ff5c04dc0638b114fef841d2564e55e45"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.134167 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9lvgt" event={"ID":"0cddc243-3a83-4398-87a9-7a111581bec5","Type":"ContainerStarted","Data":"bb26d6e3f53b07cc5f909df05bd03481448ccbd13317a7a9613866e7d60a68d9"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.136444 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-w9cb6" event={"ID":"a766d0cf-2406-4406-aaec-51a9da3d6b55","Type":"ContainerStarted","Data":"e67dbae70f3848319fd820499e081558e078d976eda5c4b905a0590c1684b2c7"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.138713 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" event={"ID":"65c4b2d3-8915-480e-abf5-3b3e0184f778","Type":"ContainerStarted","Data":"d8872e708ab881cc8856cf37a1b15fe5412e07d9e31d6719c2e889aa0b19dd4a"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.141598 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" event={"ID":"ed007985-f681-4a45-a71a-ba27798fa94d","Type":"ContainerStarted","Data":"8a11bb5171ba5d9dc23d479eca3191eb0e7b909abf910bef0442cc55d9f1add6"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.141638 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" event={"ID":"ed007985-f681-4a45-a71a-ba27798fa94d","Type":"ContainerStarted","Data":"9b91deda393240ac0e049bdae4b0ccfb5212211e9a307158fbaf777467aadc40"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.145281 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mjbd9" event={"ID":"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa","Type":"ContainerStarted","Data":"29dc94a7cacd0d6782c91a85697e89bb52396ab2bc3c603de238426daecf7d28"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.145319 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mjbd9" 
event={"ID":"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa","Type":"ContainerStarted","Data":"7f20fa19d5bb86f940e6da3f6f5021f747522c3be2c435828c18b5855f767596"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.145782 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.150717 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" event={"ID":"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed","Type":"ContainerStarted","Data":"3803ed51792b02386aa7ef87726ea6a9518fc3a8a86643b1d5b77ebee1f3ad72"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.151737 4763 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bpxtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.151778 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" podUID="b67dedfb-accc-467d-a3bb-508eab4f88c8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.152039 4763 patch_prober.go:28] interesting pod/console-operator-58897d9998-mjbd9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.152066 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mjbd9" podUID="d2c4bb39-a442-4316-81a9-d5e8f9d10eaa" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.226807 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.227195 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:03.727181268 +0000 UTC m=+143.481919561 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.327376 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.328792 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:03.82877447 +0000 UTC m=+143.583512763 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.429661 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.429977 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:03.929965379 +0000 UTC m=+143.684703662 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.432707 4763 csr.go:261] certificate signing request csr-v6bb7 is approved, waiting to be issued Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.443875 4763 csr.go:257] certificate signing request csr-v6bb7 is issued Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.534001 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.534195 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.034163122 +0000 UTC m=+143.788901425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.534364 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.534758 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.034746801 +0000 UTC m=+143.789485094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: W0131 14:57:03.625846 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80834dac_7e21_4dda_8f32_3a19eced5753.slice/crio-d711737b7197c600a56b4679bf5d2fde19b116c0e1902972f711cf31ad72c074 WatchSource:0}: Error finding container d711737b7197c600a56b4679bf5d2fde19b116c0e1902972f711cf31ad72c074: Status 404 returned error can't find the container with id d711737b7197c600a56b4679bf5d2fde19b116c0e1902972f711cf31ad72c074 Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.637312 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.637582 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.137566531 +0000 UTC m=+143.892304824 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.723054 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" podStartSLOduration=123.723037609 podStartE2EDuration="2m3.723037609s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:03.714370099 +0000 UTC m=+143.469108392" watchObservedRunningTime="2026-01-31 14:57:03.723037609 +0000 UTC m=+143.477775902" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.738627 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.739025 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.239013549 +0000 UTC m=+143.993751842 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.752909 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" podStartSLOduration=123.752892232 podStartE2EDuration="2m3.752892232s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:03.750358453 +0000 UTC m=+143.505096746" watchObservedRunningTime="2026-01-31 14:57:03.752892232 +0000 UTC m=+143.507630525" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.839630 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.840107 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.340092664 +0000 UTC m=+144.094830947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.849133 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-9lvgt" podStartSLOduration=123.849113935 podStartE2EDuration="2m3.849113935s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:03.846811504 +0000 UTC m=+143.601549797" watchObservedRunningTime="2026-01-31 14:57:03.849113935 +0000 UTC m=+143.603852228" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.850100 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" podStartSLOduration=122.850090046 podStartE2EDuration="2m2.850090046s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:03.802521521 +0000 UTC m=+143.557259814" watchObservedRunningTime="2026-01-31 14:57:03.850090046 +0000 UTC m=+143.604828339" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.887311 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.891834 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-mjbd9" podStartSLOduration=123.891820479 podStartE2EDuration="2m3.891820479s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:03.890195359 +0000 UTC m=+143.644933652" watchObservedRunningTime="2026-01-31 14:57:03.891820479 +0000 UTC m=+143.646558772" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.898647 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:03 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:03 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:03 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.898712 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.942638 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.943124 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.44311258 +0000 UTC m=+144.197850873 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.043741 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.044586 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.544571838 +0000 UTC m=+144.299310121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.100608 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" podStartSLOduration=123.100593637 podStartE2EDuration="2m3.100593637s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.098383408 +0000 UTC m=+143.853121701" watchObservedRunningTime="2026-01-31 14:57:04.100593637 +0000 UTC m=+143.855331920" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.101070 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" podStartSLOduration=124.101066642 podStartE2EDuration="2m4.101066642s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.044444444 +0000 UTC m=+143.799182737" watchObservedRunningTime="2026-01-31 14:57:04.101066642 +0000 UTC m=+143.855804935" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.133985 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" podStartSLOduration=125.13397017 podStartE2EDuration="2m5.13397017s" podCreationTimestamp="2026-01-31 14:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.13208898 +0000 UTC m=+143.886827273" watchObservedRunningTime="2026-01-31 14:57:04.13397017 +0000 UTC m=+143.888708463" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.150077 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.150421 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.650409813 +0000 UTC m=+144.405148106 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.176113 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-87f9c" podStartSLOduration=123.176094355 podStartE2EDuration="2m3.176094355s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.174555197 +0000 UTC m=+143.929293490" watchObservedRunningTime="2026-01-31 14:57:04.176094355 +0000 UTC m=+143.930832648" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.194973 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" event={"ID":"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed","Type":"ContainerStarted","Data":"4b7ac09e1683fd8dabfb4c5b5de8cfe0c575bae9dae1b7e1cae4dde0791c0d82"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.195442 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.213194 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" podStartSLOduration=124.213175862 podStartE2EDuration="2m4.213175862s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.209768236 +0000 UTC m=+143.964506539" watchObservedRunningTime="2026-01-31 14:57:04.213175862 +0000 UTC m=+143.967914155" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.219292 4763 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" event={"ID":"ed007985-f681-4a45-a71a-ba27798fa94d","Type":"ContainerStarted","Data":"3b3d27c7780aa1060916843040308355861c6b2ef9a619eff2f2055cd8fd347c"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.245682 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" event={"ID":"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac","Type":"ContainerStarted","Data":"8d7be5ea0747f83a9960f08fae5ca06d25aa6560575d596c4a703419afc72e37"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.252925 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.253221 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.753207732 +0000 UTC m=+144.507946025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.268766 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg" event={"ID":"bb78095b-d026-498f-9616-d8365161f809","Type":"ContainerStarted","Data":"ea59f192bd69de9016c2dc9d98c422f3e9eb57b32b5ec14b8b4422e72f9c86d0"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.312309 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" event={"ID":"31cdca6f-11b2-4888-9a4c-4b06a94d1863","Type":"ContainerStarted","Data":"e88da184c6479564b9ca190b87ef6ab8b8d18a0af16f5321592d0f027029a41b"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.324606 4763 generic.go:334] "Generic (PLEG): container finished" podID="229488e3-89a8-4eb4-841e-980db3f8cfb3" containerID="481c9ea0495b01fa7dccab3a794e907a38c5c8dcc3be73e28b755c124a63743c" exitCode=0 Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.324670 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" event={"ID":"229488e3-89a8-4eb4-841e-980db3f8cfb3","Type":"ContainerDied","Data":"481c9ea0495b01fa7dccab3a794e907a38c5c8dcc3be73e28b755c124a63743c"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.328973 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bh727" event={"ID":"db0aea6c-f6f8-4548-905b-22d810b334d4","Type":"ContainerStarted","Data":"97a86a7bd66eecb4d677a41a10e65f5357767ed64bb5a9943a51a8d83c59018c"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.329891 4763 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bh727" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.334642 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" podStartSLOduration=123.334630784 podStartE2EDuration="2m3.334630784s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.239540815 +0000 UTC m=+143.994279108" watchObservedRunningTime="2026-01-31 14:57:04.334630784 +0000 UTC m=+144.089369077" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.343472 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" event={"ID":"80834dac-7e21-4dda-8f32-3a19eced5753","Type":"ContainerStarted","Data":"d711737b7197c600a56b4679bf5d2fde19b116c0e1902972f711cf31ad72c074"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.355520 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.356788 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.856773006 +0000 UTC m=+144.611511299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.358960 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" event={"ID":"e1b409a5-8274-478d-98bf-fe2171d90c63","Type":"ContainerStarted","Data":"c58f9ab4796cf505743aa587609be5cc9b4b478501a00ba9268fe2ccf4583f9a"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.359727 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.360008 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.371841 4763 patch_prober.go:28] interesting pod/downloads-7954f5f757-bh727 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.371915 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bh727" podUID="db0aea6c-f6f8-4548-905b-22d810b334d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.391914 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.401911 4763 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mpmpg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.401960 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" podUID="e1b409a5-8274-478d-98bf-fe2171d90c63" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.449510 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-31 14:52:03 +0000 UTC, rotation deadline is 2026-12-15 22:59:32.996754159 +0000 UTC Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.449536 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7640h2m28.547220467s for next certificate rotation Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.455997 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.457198 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.957180031 +0000 UTC m=+144.711918334 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.489044 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" event={"ID":"ac92922f-89ed-41e7-bf6f-9750efc9cab0","Type":"ContainerStarted","Data":"310a57aebb3bfd5c0ed0a647bdc7e7ac8b0d23fcde9f6c65a6d7e2d70cf1f25c"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.489830 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.525542 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.526663 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" event={"ID":"22ff872b-afc9-4fa7-812b-f47bb3add27c","Type":"ContainerStarted","Data":"0a89a45cd53ac1bcadb3cffc2d778cf2a244867f5fc70e60b74d488a4a697683"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.526975 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.547651 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" event={"ID":"8edb2bab-1e72-4b68-afed-2de0572a1071","Type":"ContainerStarted","Data":"8ac8fcada164283b16fcd66472958b240aadf1c8717ff8aafd5b3e4cca1cd45c"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.559823 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.560213 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.060199247 +0000 UTC m=+144.814937540 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.564827 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.567263 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.568738 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5kfwr"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.570534 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-w9cb6" event={"ID":"a766d0cf-2406-4406-aaec-51a9da3d6b55","Type":"ContainerStarted","Data":"6b53d085f65466df248693cceb17eeed33350b009fad18662f02f10d66da77f5"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.578979 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5q274"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.585420 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" event={"ID":"e043d261-8774-411b-be6d-98dbb1f210a2","Type":"ContainerStarted","Data":"847da603210653d6a6eb4798db67cbcda9a95e127f4d0f0dee083dd1d9416037"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.586291 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.609877 4763 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-jtt2l container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.609942 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" podUID="e043d261-8774-411b-be6d-98dbb1f210a2" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.613661 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" event={"ID":"65c4b2d3-8915-480e-abf5-3b3e0184f778","Type":"ContainerStarted","Data":"a78dffec60999af2daf653657dc248a422023304b87076476a27df8e3f097cf7"} Jan 31 14:57:04 crc kubenswrapper[4763]: W0131 14:57:04.636045 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c40a34_73d2_4a28_b2bd_31e19e6361d2.slice/crio-1d970cab673873dbd0ffb836dc82303a0a52ef1bbbe7617fdf76121fb69fc8f6 WatchSource:0}: Error finding container 
1d970cab673873dbd0ffb836dc82303a0a52ef1bbbe7617fdf76121fb69fc8f6: Status 404 returned error can't find the container with id 1d970cab673873dbd0ffb836dc82303a0a52ef1bbbe7617fdf76121fb69fc8f6 Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.637750 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" event={"ID":"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc","Type":"ContainerStarted","Data":"ed7d3da6199e8bb4c55e177b1afca8ac78c017a1ea997eff233008f48616b7c8"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.637777 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.645945 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gxcjc"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.679640 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.685031 4763 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-flcgf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.685098 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.703211 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.203185511 +0000 UTC m=+144.957923804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.706612 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.711256 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" podStartSLOduration=123.711232072 podStartE2EDuration="2m3.711232072s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.702868811 +0000 UTC m=+144.457607124" watchObservedRunningTime="2026-01-31 14:57:04.711232072 +0000 UTC m=+144.465970375" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.745745 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-l8kn4"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.746551 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj"] Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.746966 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.246947728 +0000 UTC m=+145.001686011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.756187 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.777136 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-bh727" podStartSLOduration=124.777118349 podStartE2EDuration="2m4.777118349s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.766051454 +0000 UTC m=+144.520789747" watchObservedRunningTime="2026-01-31 14:57:04.777118349 +0000 UTC m=+144.531856642" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.792682 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-w9cb6" podStartSLOduration=5.792666365 podStartE2EDuration="5.792666365s" podCreationTimestamp="2026-01-31 14:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.792350325 +0000 UTC m=+144.547088618" watchObservedRunningTime="2026-01-31 14:57:04.792666365 +0000 UTC m=+144.547404658" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.811233 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.812429 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.312406041 +0000 UTC m=+145.067144334 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.883779 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" podStartSLOduration=123.883763309 podStartE2EDuration="2m3.883763309s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.88154143 +0000 UTC m=+144.636279723" watchObservedRunningTime="2026-01-31 14:57:04.883763309 +0000 UTC m=+144.638501602" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.889637 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:04 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:04 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:04 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.889749 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.917236 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.917579 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.417563824 +0000 UTC m=+145.172302117 (durationBeforeRetry 500ms). 
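
Note: the recurring failure in these entries is one condition, not many. The kubelet is attempting both MountVolume.MountDevice and UnmountVolume.TearDown for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 before the kubevirt.io.hostpath-provisioner CSI plugin has completed node-level registration, so every attempt ends with "not found in the list of registered CSI drivers". Which drivers have registered on a node is visible in that node's CSINode object (e.g. `oc get csinode -o yaml`). A minimal client-go sketch of the same check, assuming only a reachable kubeconfig at the default path (this is diagnostic code, not kubelet code):

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One CSINode object exists per node; Spec.Drivers lists every CSI
	// plugin that has completed kubelet plugin registration there.
	nodes, err := cs.StorageV1().CSINodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, d := range n.Spec.Drivers {
			fmt.Printf("node %s: driver %s registered\n", n.Name, d.Name)
		}
	}
}

Once the hostpath-provisioner plugin pod (csi-hostpathplugin-5kfwr, whose container start appears later in this window) registers the driver, these queued mount and unmount operations can complete on a subsequent retry.
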
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.982682 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg" podStartSLOduration=123.982666397 podStartE2EDuration="2m3.982666397s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.978973332 +0000 UTC m=+144.733711625" watchObservedRunningTime="2026-01-31 14:57:04.982666397 +0000 UTC m=+144.737404690" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.982830 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" podStartSLOduration=123.982824932 podStartE2EDuration="2m3.982824932s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.940631545 +0000 UTC m=+144.695369838" watchObservedRunningTime="2026-01-31 14:57:04.982824932 +0000 UTC m=+144.737563235" Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.025900 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.027636 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.52760301 +0000 UTC m=+145.282341303 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.093376 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" podStartSLOduration=124.093358083 podStartE2EDuration="2m4.093358083s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:05.043432654 +0000 UTC m=+144.798170947" watchObservedRunningTime="2026-01-31 14:57:05.093358083 +0000 UTC m=+144.848096376" Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.125839 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" podStartSLOduration=124.125824607 podStartE2EDuration="2m4.125824607s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:05.124235187 +0000 UTC m=+144.878973480" watchObservedRunningTime="2026-01-31 14:57:05.125824607 +0000 UTC m=+144.880562900" Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.128389 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.128670 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.628660435 +0000 UTC m=+145.383398728 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.230180 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.230611 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.730581187 +0000 UTC m=+145.485319480 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.332471 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.333221 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.833208401 +0000 UTC m=+145.587946694 (durationBeforeRetry 500ms). 
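
Note: each failed attempt above is re-queued by nestedpendingoperations.go with "No retries permitted until ... (durationBeforeRetry 500ms)". A toy sketch of that gating pattern follows, assuming only its general shape: the kubelet's real implementation is keyed per volume and pod and backs the delay off further after repeated failures toward a cap, while this window only ever shows the initial 500ms.

package main

import (
	"errors"
	"fmt"
	"time"
)

type backoffGate struct {
	delay     time.Duration // durationBeforeRetry applied at the next failure
	notBefore time.Time     // no retries permitted until this instant
}

func (g *backoffGate) try(op func() error, now time.Time) error {
	if now.Before(g.notBefore) {
		// Mirrors the log message: the operation is rejected outright.
		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
			g.notBefore.Format(time.RFC3339Nano), g.delay)
	}
	if err := op(); err != nil {
		g.notBefore = now.Add(g.delay)
		if g.delay < 2*time.Minute { // grow toward a cap on repeated failures
			g.delay *= 2
		}
		return err
	}
	g.delay = 500 * time.Millisecond // reset on success
	return nil
}

func main() {
	g := &backoffGate{delay: 500 * time.Millisecond}
	mount := func() error { return errors.New("driver not yet registered") }
	for i := 0; i < 3; i++ {
		fmt.Println(g.try(mount, time.Now()))
		time.Sleep(200 * time.Millisecond)
	}
}
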
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.433704 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.434000 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.933986008 +0000 UTC m=+145.688724301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.536095 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.536390 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.036379315 +0000 UTC m=+145.791117608 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.636817 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.637423 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.137398609 +0000 UTC m=+145.892136892 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.637688 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.637996 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.137988808 +0000 UTC m=+145.892727101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.639454 4763 patch_prober.go:28] interesting pod/console-operator-58897d9998-mjbd9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.639489 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mjbd9" podUID="d2c4bb39-a442-4316-81a9-d5e8f9d10eaa" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.670952 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" event={"ID":"d73a5142-56cf-4676-a6f1-a00868938c4d","Type":"ContainerStarted","Data":"456d6b1a1346cc108faafe6926b4f9472b7ce543eca0663c104667376f4cf961"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.698420 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" event={"ID":"6f2d8117-c3e5-498f-8458-e72238d0f0ac","Type":"ContainerStarted","Data":"82ba72f4fa7df2f9ad6025ec5cbca994e78b07f2d0f1c87ff0e2cb5e8acf9209"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.698464 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" event={"ID":"6f2d8117-c3e5-498f-8458-e72238d0f0ac","Type":"ContainerStarted","Data":"d1d40c0ba02382f5beb79752d92ba13fd4d270ccec6ecf859e594ba69a4c07ab"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.738427 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.739019 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.239003871 +0000 UTC m=+145.993742164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.755454 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" event={"ID":"22ff872b-afc9-4fa7-812b-f47bb3add27c","Type":"ContainerStarted","Data":"82b5eae420694e7d1719ccdf851ff890cbaff02f0c8642080ef507bd17f9e608"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.759167 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l8kn4" event={"ID":"1113d5ad-40c9-412f-92c2-2fb0d6ec2903","Type":"ContainerStarted","Data":"c2c009a175d1429c4cf224d15428e2d57bdfaac9033034c3b7e86bcdd0238516"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.773121 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" event={"ID":"301a24de-a6b1-45a1-a12d-663325e45fd6","Type":"ContainerStarted","Data":"287f4fc003336ab101921f30547c2c01a6faf9dd00bef9fa49e58d548c2d25b7"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.778425 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" event={"ID":"8edb2bab-1e72-4b68-afed-2de0572a1071","Type":"ContainerStarted","Data":"01fa4333d4a253cf405c341595ed734764be15b4283c12766510142007a216d5"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.783252 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" event={"ID":"09c58528-2088-4902-ab32-10cd90be0562","Type":"ContainerStarted","Data":"fb1a983d49894ff0ab18fd981047fa43bf0a6e15b9bdc699979c338f80cca563"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.783289 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" event={"ID":"09c58528-2088-4902-ab32-10cd90be0562","Type":"ContainerStarted","Data":"37ae68a5f4012911c14e0ed16b914774014b1b5d06ac7abb6359499bf76b96b8"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.794059 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5q274" event={"ID":"92ef1804-52cd-46a1-86e1-baf561981f8b","Type":"ContainerStarted","Data":"a1fc8efab5c39584667245f36e76d20cdb077bb01f0938d4bd2956edf41c2ec0"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.794103 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5q274" event={"ID":"92ef1804-52cd-46a1-86e1-baf561981f8b","Type":"ContainerStarted","Data":"10a780a402cf35fb0023055561f6e9174995c847c625f8d217f1bb2a9e1ca8f6"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.802743 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" event={"ID":"275ea46d-7a78-4457-a5ba-7b3000170d0e","Type":"ContainerStarted","Data":"003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.803603 4763 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.805049 4763 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8pcvn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.805085 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" podUID="275ea46d-7a78-4457-a5ba-7b3000170d0e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.818067 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" event={"ID":"e043d261-8774-411b-be6d-98dbb1f210a2","Type":"ContainerStarted","Data":"c6e9c0d00cddc92097f62e8b77dd4b4fe7135848d8ef3253a47190cdd69ff201"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.818727 4763 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-jtt2l container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.818774 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" podUID="e043d261-8774-411b-be6d-98dbb1f210a2" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.825055 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" podStartSLOduration=124.825043378 podStartE2EDuration="2m4.825043378s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:05.167121367 +0000 UTC m=+144.921859660" watchObservedRunningTime="2026-01-31 14:57:05.825043378 +0000 UTC m=+145.579781661" Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.827706 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" event={"ID":"7540b5d1-cac8-4c3d-88f1-cd961bf8bd47","Type":"ContainerStarted","Data":"62a66bfd6cc767e7ee618a824626029933567e5c34bce3b55de02aa12eaba356"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.839188 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" event={"ID":"20c40a34-73d2-4a28-b2bd-31e19e6361d2","Type":"ContainerStarted","Data":"f8f3a2ee5fed8706cd33e083136df0eff635e736dd9b8ba1a9267757cea26ad5"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.839228 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" 
event={"ID":"20c40a34-73d2-4a28-b2bd-31e19e6361d2","Type":"ContainerStarted","Data":"1d970cab673873dbd0ffb836dc82303a0a52ef1bbbe7617fdf76121fb69fc8f6"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.839962 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.840247 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.340237402 +0000 UTC m=+146.094975695 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.855266 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" event={"ID":"229488e3-89a8-4eb4-841e-980db3f8cfb3","Type":"ContainerStarted","Data":"c3a8270916cc07fc87a5bb9ec70366ec16a3ee818b16c81fc196bea7f0ce0a8b"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.855807 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.894533 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5q274" podStartSLOduration=6.894515916 podStartE2EDuration="6.894515916s" podCreationTimestamp="2026-01-31 14:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:05.894349131 +0000 UTC m=+145.649087424" watchObservedRunningTime="2026-01-31 14:57:05.894515916 +0000 UTC m=+145.649254209" Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.900329 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" event={"ID":"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba","Type":"ContainerStarted","Data":"ef3640ba5dfd6ee8c2b57c29cccccc184137dd794ae305f3c34e1425040e217c"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.900378 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" event={"ID":"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba","Type":"ContainerStarted","Data":"8b7e49deeb4b590ab939f5487365f8533b6127d34db0b1012b0ed72a2f526d5b"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.901151 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Jan 31 14:57:05 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:05 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:05 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.901187 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.906928 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" event={"ID":"80834dac-7e21-4dda-8f32-3a19eced5753","Type":"ContainerStarted","Data":"5a5f87c9f3f86b2cd1777b046b1ba01a718f729a1b1d1fae362a9b609f49f66d"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.914239 4763 generic.go:334] "Generic (PLEG): container finished" podID="ac92922f-89ed-41e7-bf6f-9750efc9cab0" containerID="f9860ce320981dfa3284da2e14b26b648c5f109f25564f3662c4878ec6345fe6" exitCode=0 Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.914309 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" event={"ID":"ac92922f-89ed-41e7-bf6f-9750efc9cab0","Type":"ContainerStarted","Data":"bcbfe9cd2bfe6b0b1eb74ad0ee4bc4a1cd47e8b3421a3856246fb3b74e353a90"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.914335 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" event={"ID":"ac92922f-89ed-41e7-bf6f-9750efc9cab0","Type":"ContainerDied","Data":"f9860ce320981dfa3284da2e14b26b648c5f109f25564f3662c4878ec6345fe6"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.908949 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" podStartSLOduration=124.908932106 podStartE2EDuration="2m4.908932106s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:05.82993269 +0000 UTC m=+145.584670983" watchObservedRunningTime="2026-01-31 14:57:05.908932106 +0000 UTC m=+145.663670399" Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.916525 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" event={"ID":"f313ecec-c631-4270-a297-51e482e3e306","Type":"ContainerStarted","Data":"00626847b6f2806ccf97368186ec88e0c96ba648c1756555d03911581d4e1a36"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.916556 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" event={"ID":"f313ecec-c631-4270-a297-51e482e3e306","Type":"ContainerStarted","Data":"6ffd71abac5d892e02e137909c1876c77fbdda37cd96e58f3b5b2f19eaabe01c"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.917461 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" event={"ID":"31cdca6f-11b2-4888-9a4c-4b06a94d1863","Type":"ContainerStarted","Data":"84ede2fa918650a3ab16f7cf7fc4537f323f3b2fa26ed2dfdf2587ace448968c"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.923356 4763 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" podStartSLOduration=125.923341497 podStartE2EDuration="2m5.923341497s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:05.923308056 +0000 UTC m=+145.678046359" watchObservedRunningTime="2026-01-31 14:57:05.923341497 +0000 UTC m=+145.678079790" Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.926846 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" event={"ID":"e6758143-5085-416e-9bdc-856a520c71de","Type":"ContainerStarted","Data":"123cec0865c0f54326439a7598a17a0d3df451d9acbfa2dc9a65f116dbb9ed57"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.944348 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.945329 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.445309152 +0000 UTC m=+146.200047445 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.952678 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" podStartSLOduration=125.952662302 podStartE2EDuration="2m5.952662302s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:05.951097954 +0000 UTC m=+145.705836247" watchObservedRunningTime="2026-01-31 14:57:05.952662302 +0000 UTC m=+145.707400595" Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.961763 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" event={"ID":"1dc5ca38-64fe-41f8-a989-0b035bf29414","Type":"ContainerStarted","Data":"32e7a943841330c0d61412a6e339704c21b1a02faa72959de721c41156cfd620"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.961809 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" event={"ID":"1dc5ca38-64fe-41f8-a989-0b035bf29414","Type":"ContainerStarted","Data":"2bfd671e8be6466ee4787ab40466107dd31d44c964fd6dd13fff675b9ba5b71b"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.969273 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" event={"ID":"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc","Type":"ContainerStarted","Data":"f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.969345 4763 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-flcgf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.969405 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.970835 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" event={"ID":"a7826828-7856-44a4-be9f-f1a939950c3e","Type":"ContainerStarted","Data":"510b0d29eff4a9d0163deefbfdaa8c49369e26b47e5cbee8f26b1a4e7c32c84b"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.970869 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" event={"ID":"a7826828-7856-44a4-be9f-f1a939950c3e","Type":"ContainerStarted","Data":"fddd83b22bce90c8677da04cc954fae7fac22d84fd2ad39e5e28efa0cfbe964e"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.973417 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" event={"ID":"84d11c6f-169b-4e21-87ec-8bb8930a1831","Type":"ContainerStarted","Data":"d42738e8b3360dca7ad3b81163d6097426276836ef34fbfd4844187a0b964616"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.973439 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" event={"ID":"84d11c6f-169b-4e21-87ec-8bb8930a1831","Type":"ContainerStarted","Data":"aa78659a0d31dbb475b1238e21a6de3fb7caa06077698a581b59b799d15e1656"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.974007 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.975102 4763 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-f7fgc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.975129 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" podUID="84d11c6f-169b-4e21-87ec-8bb8930a1831" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.976403 4763 generic.go:334] "Generic (PLEG): container finished" podID="330d3fd9-790f-406d-a122-152a1ab07e5c" 
containerID="9d8f1179a12c329107a54a8a722d4357264cb30c5b2390643df7556521ff5d6f" exitCode=0 Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.976472 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" event={"ID":"330d3fd9-790f-406d-a122-152a1ab07e5c","Type":"ContainerDied","Data":"9d8f1179a12c329107a54a8a722d4357264cb30c5b2390643df7556521ff5d6f"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.977630 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" podStartSLOduration=125.977614532 podStartE2EDuration="2m5.977614532s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:05.975481085 +0000 UTC m=+145.730219378" watchObservedRunningTime="2026-01-31 14:57:05.977614532 +0000 UTC m=+145.732352825" Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.991317 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" event={"ID":"f6f56211-548f-4d20-9c0a-70108a8f557b","Type":"ContainerStarted","Data":"52b3223f73d8c49aceeccc88ac29abed3505575d49692c316c27ec46e81800d4"} Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.992670 4763 patch_prober.go:28] interesting pod/downloads-7954f5f757-bh727 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.992838 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bh727" podUID="db0aea6c-f6f8-4548-905b-22d810b334d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.005782 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" podStartSLOduration=125.00576743 podStartE2EDuration="2m5.00576743s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:06.003947613 +0000 UTC m=+145.758685906" watchObservedRunningTime="2026-01-31 14:57:06.00576743 +0000 UTC m=+145.760505723" Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.008187 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.043513 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" podStartSLOduration=126.043494779 podStartE2EDuration="2m6.043494779s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:06.039208765 +0000 UTC m=+145.793947058" watchObservedRunningTime="2026-01-31 14:57:06.043494779 +0000 UTC m=+145.798233072" Jan 31 14:57:06 crc kubenswrapper[4763]: 
I0131 14:57:06.046003 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.047370 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.547357838 +0000 UTC m=+146.302096131 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.123785 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" podStartSLOduration=125.123766594 podStartE2EDuration="2m5.123766594s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:06.123439934 +0000 UTC m=+145.878178227" watchObservedRunningTime="2026-01-31 14:57:06.123766594 +0000 UTC m=+145.878504887" Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.125252 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" podStartSLOduration=125.12524589 podStartE2EDuration="2m5.12524589s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:06.095991197 +0000 UTC m=+145.850729490" watchObservedRunningTime="2026-01-31 14:57:06.12524589 +0000 UTC m=+145.879984183" Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.147246 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.148514 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.648500657 +0000 UTC m=+146.403238950 (durationBeforeRetry 500ms). 
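
Note: the "SyncLoop (PLEG)" entries interleaved above come from the Pod Lifecycle Event Generator, which relists container runtime state and emits one event per observed transition (ContainerStarted, ContainerDied, and so on). The event={...} payload is printed as JSON, so it can be pulled back out of a log like this with a mirror struct; the struct below is a hypothetical decoding aid with field meanings inferred from the log, not the kubelet's internal type. The sample payload is copied from the csi-hostpathplugin-5kfwr entry above.

package main

import (
	"encoding/json"
	"fmt"
)

type plegEvent struct {
	ID   string `json:"ID"`   // pod UID
	Type string `json:"Type"` // lifecycle transition, e.g. ContainerStarted
	Data string `json:"Data"` // container or sandbox ID for the transition
}

func main() {
	raw := `{"ID":"d73a5142-56cf-4676-a6f1-a00868938c4d",` +
		`"Type":"ContainerStarted",` +
		`"Data":"456d6b1a1346cc108faafe6926b4f9472b7ce543eca0663c104667376f4cf961"}`
	var ev plegEvent
	if err := json.Unmarshal([]byte(raw), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("pod %s: %s (%s)\n", ev.ID, ev.Type, ev.Data)
}
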
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.207963 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" podStartSLOduration=126.207945192 podStartE2EDuration="2m6.207945192s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:06.161270426 +0000 UTC m=+145.916008719" watchObservedRunningTime="2026-01-31 14:57:06.207945192 +0000 UTC m=+145.962683485" Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.208912 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" podStartSLOduration=125.208906863 podStartE2EDuration="2m5.208906863s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:06.206396124 +0000 UTC m=+145.961134417" watchObservedRunningTime="2026-01-31 14:57:06.208906863 +0000 UTC m=+145.963645156" Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.249628 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.249953 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.749941914 +0000 UTC m=+146.504680207 (durationBeforeRetry 500ms). 
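
Note: the pod_startup_latency_tracker entries are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and since both pull timestamps are zeroed everywhere in this log, podStartSLOduration carries the same value in seconds. A quick arithmetic check against the openshift-config-operator entry above (timestamps copied verbatim; the layout string is the only assumption):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-31 14:55:00 +0000 UTC")
	if err != nil {
		panic(err)
	}
	watched, err := time.Parse(layout, "2026-01-31 14:57:06.207945192 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 2m6.207945192s, matching podStartE2EDuration and the
	// podStartSLOduration value of 126.207945192 seconds.
	fmt.Println(watched.Sub(created))
}
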
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.279827 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" podStartSLOduration=125.279811826 podStartE2EDuration="2m5.279811826s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:06.277281668 +0000 UTC m=+146.032019971" watchObservedRunningTime="2026-01-31 14:57:06.279811826 +0000 UTC m=+146.034550109" Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.337576 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" podStartSLOduration=125.3375616 podStartE2EDuration="2m5.3375616s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:06.336078593 +0000 UTC m=+146.090816886" watchObservedRunningTime="2026-01-31 14:57:06.3375616 +0000 UTC m=+146.092299893" Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.358662 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.359136 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.859117702 +0000 UTC m=+146.613855995 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.431319 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" podStartSLOduration=125.431304957 podStartE2EDuration="2m5.431304957s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:06.430284784 +0000 UTC m=+146.185023087" watchObservedRunningTime="2026-01-31 14:57:06.431304957 +0000 UTC m=+146.186043250" Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.460512 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.461234 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.9612134 +0000 UTC m=+146.715951753 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.562187 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.562516 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.062496202 +0000 UTC m=+146.817234495 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.664111 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.664453 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.164427265 +0000 UTC m=+146.919165558 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.765353 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.765657 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.265643204 +0000 UTC m=+147.020381497 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.866640 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.867086 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.366952428 +0000 UTC m=+147.121690721 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.886336 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:06 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:06 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:06 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.886394 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.968055 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.968192 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.468174588 +0000 UTC m=+147.222912881 (durationBeforeRetry 500ms). 
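
Note: two distinct probe-failure shapes appear in this window. "connect: connection refused" (oauth-openshift, packageserver, catalog-operator, marketplace-operator) means nothing is listening on the port yet, i.e. the container started but its server has not bound. The router's startup probe instead reaches a live endpoint that answers 500 with per-check detail: [-]backend-http and [-]has-synced failing, [+]process-running ok, then "healthz check failed". That output follows the usual Kubernetes aggregated-healthz convention; here is a minimal stdlib sketch of that convention, with the check names borrowed from the log and all wiring assumed:

package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	fn   func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			// The prober then logs: HTTP probe failed with statuscode: 500.
			w.WriteHeader(http.StatusInternalServerError)
			fmt.Fprint(w, body+"healthz check failed")
			return
		}
		fmt.Fprint(w, body+"ok")
	}
}

func main() {
	checks := []check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not ready") }},
		{"process-running", func() error { return nil }},
	}
	http.HandleFunc("/healthz", healthz(checks))
	if err := http.ListenAndServe(":8080", nil); err != nil {
		panic(err)
	}
}
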
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.968269 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.968550 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.468542459 +0000 UTC m=+147.223280752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.003755 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l8kn4" event={"ID":"1113d5ad-40c9-412f-92c2-2fb0d6ec2903","Type":"ContainerStarted","Data":"c367a26e2ece0070291348bc7efa7b56efa140c5b2ed83cb252a5a15f57c810a"} Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.005281 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" event={"ID":"301a24de-a6b1-45a1-a12d-663325e45fd6","Type":"ContainerStarted","Data":"91fb83c7a4ae79d09f15b0ccba3ad87c567a9a7d0bb06a687765d025b7cee5f3"} Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.005678 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.007280 4763 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4fhtj container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.007322 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" podUID="301a24de-a6b1-45a1-a12d-663325e45fd6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.007349 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" 
event={"ID":"e6758143-5085-416e-9bdc-856a520c71de","Type":"ContainerStarted","Data":"0dc0630ff026b9c2e13627d0fe50273a3b95b3aa41e0eec755870b558d1668bd"} Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.019986 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" event={"ID":"330d3fd9-790f-406d-a122-152a1ab07e5c","Type":"ContainerStarted","Data":"a76ce5d3db29e4f9da884517c12ae377269007818566deb9c9386ee1cd4fd25a"} Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.023048 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" event={"ID":"6f2d8117-c3e5-498f-8458-e72238d0f0ac","Type":"ContainerStarted","Data":"7159a82b753e9850cb63bff88d97c6e1ade82629304fe20a1011bf3aee2de73b"} Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.023154 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.030390 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" podStartSLOduration=126.030362729 podStartE2EDuration="2m6.030362729s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:07.027413377 +0000 UTC m=+146.782151740" watchObservedRunningTime="2026-01-31 14:57:07.030362729 +0000 UTC m=+146.785101062" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.034676 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" event={"ID":"80834dac-7e21-4dda-8f32-3a19eced5753","Type":"ContainerStarted","Data":"5e181f6da09fdf74a70e41b344a2b9b02b08025edaf9d734ed2b557ab70a6604"} Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.035544 4763 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8pcvn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.035609 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" podUID="275ea46d-7a78-4457-a5ba-7b3000170d0e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.035900 4763 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-f7fgc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.035966 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" podUID="84d11c6f-169b-4e21-87ec-8bb8930a1831" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.036314 4763 
patch_prober.go:28] interesting pod/marketplace-operator-79b997595-flcgf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.036367 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.036931 4763 patch_prober.go:28] interesting pod/downloads-7954f5f757-bh727 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.036974 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bh727" podUID="db0aea6c-f6f8-4548-905b-22d810b334d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.040044 4763 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5zjq4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.040804 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" podUID="229488e3-89a8-4eb4-841e-980db3f8cfb3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.054269 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" podStartSLOduration=126.054240265 podStartE2EDuration="2m6.054240265s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:07.043041025 +0000 UTC m=+146.797779338" watchObservedRunningTime="2026-01-31 14:57:07.054240265 +0000 UTC m=+146.808978598" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.067786 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" podStartSLOduration=126.067757947 podStartE2EDuration="2m6.067757947s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:07.064947249 +0000 UTC m=+146.819685602" watchObservedRunningTime="2026-01-31 14:57:07.067757947 +0000 UTC m=+146.822496270" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.070292 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.070404 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.570388879 +0000 UTC m=+147.325127172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.071530 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.074651 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.574633822 +0000 UTC m=+147.329372125 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.075575 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.082333 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.083637 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" podStartSLOduration=126.083617562 podStartE2EDuration="2m6.083617562s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:07.082092904 +0000 UTC m=+146.836831207" watchObservedRunningTime="2026-01-31 14:57:07.083617562 +0000 UTC m=+146.838355855" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.084104 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.084223 4763 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-nwfnl container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.084267 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" podUID="ac92922f-89ed-41e7-bf6f-9750efc9cab0" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.173006 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.173201 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.673175778 +0000 UTC m=+147.427914071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.173330 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.173673 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.673661454 +0000 UTC m=+147.428399747 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.274858 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.275082 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.775049329 +0000 UTC m=+147.529787642 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.275200 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.275504 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.775489572 +0000 UTC m=+147.530227875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.377388 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.377550 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.877515468 +0000 UTC m=+147.632253791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.378465 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.378984 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.878955983 +0000 UTC m=+147.633694306 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.483997 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.484431 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.984413716 +0000 UTC m=+147.739152019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.593592 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.595974 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.095954618 +0000 UTC m=+147.850692921 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.696561 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.696932 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.19691345 +0000 UTC m=+147.951651753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.696995 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.697307 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.197298303 +0000 UTC m=+147.952036606 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.800682 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.800861 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.300831625 +0000 UTC m=+148.055569918 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.801008 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.801352 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.301340901 +0000 UTC m=+148.056079194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.886400 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:07 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:07 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:07 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.886457 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.902016 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.902200 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.402175439 +0000 UTC m=+148.156913732 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.902283 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.902586 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.402576921 +0000 UTC m=+148.157315314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.999329 4763 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5zjq4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.999386 4763 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5zjq4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.999438 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" podUID="229488e3-89a8-4eb4-841e-980db3f8cfb3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.999387 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" podUID="229488e3-89a8-4eb4-841e-980db3f8cfb3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.003128 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.003385 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.503349058 +0000 UTC m=+148.258087361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.003464 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.003495 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.003531 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.003557 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.003887 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.503877445 +0000 UTC m=+148.258615738 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.012553 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.018744 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.019492 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.041783 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l8kn4" event={"ID":"1113d5ad-40c9-412f-92c2-2fb0d6ec2903","Type":"ContainerStarted","Data":"c23f7930b2a127978fd2f7d73e226c0347d2c69ae39fe7fe4658ddd996198d69"} Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.041888 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.043600 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" event={"ID":"330d3fd9-790f-406d-a122-152a1ab07e5c","Type":"ContainerStarted","Data":"7243cb12415fcd1c0a6c4f36767fe7bdcd431b37d26d6d60cf2dd182417f6a09"} Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.044364 4763 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5zjq4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.044399 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" podUID="229488e3-89a8-4eb4-841e-980db3f8cfb3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.044568 4763 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4fhtj container/catalog-operator namespace/openshift-operator-lifecycle-manager: 
Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.044594 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" podUID="301a24de-a6b1-45a1-a12d-663325e45fd6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.051127 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.059423 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-l8kn4" podStartSLOduration=9.059408688 podStartE2EDuration="9.059408688s" podCreationTimestamp="2026-01-31 14:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:08.058056756 +0000 UTC m=+147.812795049" watchObservedRunningTime="2026-01-31 14:57:08.059408688 +0000 UTC m=+147.814146971" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.071805 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.085361 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.108869 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.109348 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.110167 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.610151023 +0000 UTC m=+148.364889316 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.123563 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.130875 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" podStartSLOduration=128.130858969 podStartE2EDuration="2m8.130858969s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:08.129837957 +0000 UTC m=+147.884576260" watchObservedRunningTime="2026-01-31 14:57:08.130858969 +0000 UTC m=+147.885597272" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.211523 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.211947 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.71192749 +0000 UTC m=+148.466665773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.313972 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.314300 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.814282195 +0000 UTC m=+148.569020488 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.365418 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.415672 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.417096 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.917077895 +0000 UTC m=+148.671816188 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.517367 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.517682 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.017666576 +0000 UTC m=+148.772404869 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: W0131 14:57:08.545908 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-ad96203a09d297b43e1fc4986819a7e20491515093b726af6deba113ecd7397e WatchSource:0}: Error finding container ad96203a09d297b43e1fc4986819a7e20491515093b726af6deba113ecd7397e: Status 404 returned error can't find the container with id ad96203a09d297b43e1fc4986819a7e20491515093b726af6deba113ecd7397e Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.618425 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.618825 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.118810354 +0000 UTC m=+148.873548657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.719771 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.720136 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.220121556 +0000 UTC m=+148.974859849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.821582 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.821981 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.321965186 +0000 UTC m=+149.076703479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.885852 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:08 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:08 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:08 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.885906 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.923115 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.923388 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.423372462 +0000 UTC m=+149.178110745 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.024283 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.024562 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.524550702 +0000 UTC m=+149.279288995 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.044185 4763 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-f7fgc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.044248 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" podUID="84d11c6f-169b-4e21-87ec-8bb8930a1831" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.086923 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e49920ad6b55837fe5419ae6a0abfcef0e7f59c2433444f4113eec2ac2726755"} Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.096239 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" event={"ID":"d73a5142-56cf-4676-a6f1-a00868938c4d","Type":"ContainerStarted","Data":"be4828a0830ba7273db1a424ab41466f911c98d49e719ac495916e7ffc4ef017"} Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.105979 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"842f4d39e3f908ad742c890ec35eea76f854bc8df8197212137ef41d421444de"} Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.106028 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ad96203a09d297b43e1fc4986819a7e20491515093b726af6deba113ecd7397e"} Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.125223 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.125531 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.625517314 +0000 UTC m=+149.380255607 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.137518 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4806868c4c8273ef6c9fddceefbc84aea9c3e9b1854132c4349aa8e4ec08bbf9"} Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.227109 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.228374 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.728330394 +0000 UTC m=+149.483068687 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.330964 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.331088 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.831069032 +0000 UTC m=+149.585807325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.331238 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.331494 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.831485955 +0000 UTC m=+149.586224248 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.432422 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.432568 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.93254606 +0000 UTC m=+149.687284353 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.432600 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.432890 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.93288227 +0000 UTC m=+149.687620563 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.533949 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.534137 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.034109301 +0000 UTC m=+149.788847594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.534269 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.534625 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.034613317 +0000 UTC m=+149.789351670 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.635845 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.636031 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.136005872 +0000 UTC m=+149.890744165 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.636373 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.636683 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.136674583 +0000 UTC m=+149.891412876 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.691845 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.692405 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.694303 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.694724 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.704118 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.737374 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.737518 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49e5646c-e6be-4d7c-9839-540a69daf0e9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"49e5646c-e6be-4d7c-9839-540a69daf0e9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.737542 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49e5646c-e6be-4d7c-9839-540a69daf0e9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"49e5646c-e6be-4d7c-9839-540a69daf0e9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.737623 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.237596794 +0000 UTC m=+149.992335087 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.737954 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.738307 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.238298926 +0000 UTC m=+149.993037219 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.838662 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.838824 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.338798274 +0000 UTC m=+150.093536557 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.838909 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.838939 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49e5646c-e6be-4d7c-9839-540a69daf0e9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"49e5646c-e6be-4d7c-9839-540a69daf0e9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.838958 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49e5646c-e6be-4d7c-9839-540a69daf0e9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"49e5646c-e6be-4d7c-9839-540a69daf0e9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.839068 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49e5646c-e6be-4d7c-9839-540a69daf0e9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"49e5646c-e6be-4d7c-9839-540a69daf0e9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.839261 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.339253518 +0000 UTC m=+150.093991811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.862443 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49e5646c-e6be-4d7c-9839-540a69daf0e9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"49e5646c-e6be-4d7c-9839-540a69daf0e9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.888252 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:09 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:09 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:09 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.888329 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.940545 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.940729 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.440687975 +0000 UTC m=+150.195426288 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.940893 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.941231 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.441221071 +0000 UTC m=+150.195959364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.008714 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.042164 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:10 crc kubenswrapper[4763]: E0131 14:57:10.042493 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.542479733 +0000 UTC m=+150.297218026 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.143379 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:10 crc kubenswrapper[4763]: E0131 14:57:10.143904 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.643887739 +0000 UTC m=+150.398626032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.146925 4763 generic.go:334] "Generic (PLEG): container finished" podID="20c40a34-73d2-4a28-b2bd-31e19e6361d2" containerID="f8f3a2ee5fed8706cd33e083136df0eff635e736dd9b8ba1a9267757cea26ad5" exitCode=0 Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.147031 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" event={"ID":"20c40a34-73d2-4a28-b2bd-31e19e6361d2","Type":"ContainerDied","Data":"f8f3a2ee5fed8706cd33e083136df0eff635e736dd9b8ba1a9267757cea26ad5"} Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.148117 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1f811be19ca53a5eda82c567dd9bc8aa76845bc4c4c6a6582f66148ff0509f86"} Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.159661 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5c8e361e95ea073b25fd5b3ea16920e9769595ab3cc814b162ab859a83be58d2"} Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.160362 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.179452 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" event={"ID":"d73a5142-56cf-4676-a6f1-a00868938c4d","Type":"ContainerStarted","Data":"06b21accff0c26d9d7dc03c0bc3ef51d7353a0422734ec9b7f8e2fd5290a1778"} Jan 31 14:57:10 crc kubenswrapper[4763]: 
I0131 14:57:10.211415 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.244232 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:10 crc kubenswrapper[4763]: E0131 14:57:10.245031 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.745017126 +0000 UTC m=+150.499755419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.314449 4763 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.345414 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:10 crc kubenswrapper[4763]: E0131 14:57:10.345890 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.845867075 +0000 UTC m=+150.600605438 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.446640 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:10 crc kubenswrapper[4763]: E0131 14:57:10.446876 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.946845058 +0000 UTC m=+150.701583351 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.447021 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:10 crc kubenswrapper[4763]: E0131 14:57:10.447464 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.947451287 +0000 UTC m=+150.702189580 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.548383 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:57:10 crc kubenswrapper[4763]: E0131 14:57:10.548801 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:11.04877718 +0000 UTC m=+150.803515473 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.613939 4763 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-31T14:57:10.314662611Z","Handler":null,"Name":""}
Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.620970 4763 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.621002 4763 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.650418 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.652713 4763 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
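
The entries just above are the pivot of this whole capture. Every MountVolume.MountDevice and UnmountVolume.TearDown failure since 14:57:08 has the same root cause: the kubelet has no registered CSI driver named kubevirt.io.hostpath-provisioner yet, because the node plugin (csi-hostpathplugin-5kfwr, whose containers only come up between 14:57:09 and 14:57:11) had not yet created its registration socket. At 14:57:10.314 the plugin watcher notices /var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock, at 14:57:10.620 csi_plugin.go validates and registers the driver, and the very next MountVolume attempt at 14:57:10.650 goes through. A CSI driver advertises its name over the identity service at its endpoint; below is a minimal sketch, not the kubelet's actual registration code (which goes through the plugin-registration GetInfo API), that queries the endpoint shown in the csi_plugin.go:100 entry for that name, using the published container-storage-interface Go bindings:

// Minimal sketch: ask a CSI driver for its advertised name over its
// identity endpoint. The socket path is taken from the log above; the
// kubelet's own flow differs, so treat this purely as an illustration.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// grpc.NewClient needs a recent grpc-go; grpc.Dial works on older ones.
	conn, err := grpc.NewClient("unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// A registered driver must report the same name the volume spec uses
	// ("kubevirt.io.hostpath-provisioner" here); until registration happens,
	// the kubelet can only fail with "not found in the list of registered
	// CSI drivers", exactly as logged above.
	info, err := csi.NewIdentityClient(conn).GetPluginInfo(ctx, &csi.GetPluginInfoRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("driver %q, vendor version %q\n", info.GetName(), info.GetVendorVersion())
}

The last entry above is also worth noting: the driver does not advertise the STAGE_UNSTAGE_VOLUME node capability, so the kubelet records MountDevice as a trivial success and goes straight to NodePublishVolume, which is the MountVolume.SetUp that succeeds just below.
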
Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.652748 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.678316 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.751542 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.758893 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.886206 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:10 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:10 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:10 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.886260 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.896226 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.988306 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k6ddv"] Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.990074 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.997216 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.010108 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k6ddv"] Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.011471 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.056403 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-utilities\") pod \"certified-operators-k6ddv\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") " pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.056507 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk9ln\" (UniqueName: \"kubernetes.io/projected/b8a35a73-67a0-4bb4-9954-46350d31b017-kube-api-access-vk9ln\") pod \"certified-operators-k6ddv\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") " pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.056643 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-catalog-content\") pod \"certified-operators-k6ddv\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") " pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.070074 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.152269 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dzr7c"] Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.157754 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-catalog-content\") pod \"certified-operators-k6ddv\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") " pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.157805 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-utilities\") pod \"certified-operators-k6ddv\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") " pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.157836 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk9ln\" (UniqueName: \"kubernetes.io/projected/b8a35a73-67a0-4bb4-9954-46350d31b017-kube-api-access-vk9ln\") pod \"certified-operators-k6ddv\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") " pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:11 crc 
kubenswrapper[4763]: I0131 14:57:11.158428 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-catalog-content\") pod \"certified-operators-k6ddv\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") " pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.158627 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-utilities\") pod \"certified-operators-k6ddv\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") " pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.184065 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk9ln\" (UniqueName: \"kubernetes.io/projected/b8a35a73-67a0-4bb4-9954-46350d31b017-kube-api-access-vk9ln\") pod \"certified-operators-k6ddv\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") " pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.184890 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9df4p"] Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.186992 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.191036 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.193460 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9df4p"] Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.193508 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"49e5646c-e6be-4d7c-9839-540a69daf0e9","Type":"ContainerStarted","Data":"1c3251daea8c98cbe723a2aab08e84229fbcd9000a33f6fdb2dfc2f339c11130"} Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.193531 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"49e5646c-e6be-4d7c-9839-540a69daf0e9","Type":"ContainerStarted","Data":"a0bc2a82642ee311d6552a5e94a028bf0431b120d860d1f88eae5e6a24094925"} Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.194972 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" event={"ID":"329bb364-3958-490e-b065-d2ce7ee1567d","Type":"ContainerStarted","Data":"f6421d1d39f19dfe9997df0c879a0f9ff7802342de47df550a2b31d059ccd341"} Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.214638 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" event={"ID":"d73a5142-56cf-4676-a6f1-a00868938c4d","Type":"ContainerStarted","Data":"1736498c7cf56f7eafa34020b6b340b116143234e02c496c789677a4f788cf2f"} Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.214685 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" event={"ID":"d73a5142-56cf-4676-a6f1-a00868938c4d","Type":"ContainerStarted","Data":"981572eb6b8d330cda41bf41133a89d23b6eb9a0cdf02e907c24827ab4badccd"} Jan 31 14:57:11 crc 
kubenswrapper[4763]: I0131 14:57:11.218497 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.2184831799999998 podStartE2EDuration="2.21848318s" podCreationTimestamp="2026-01-31 14:57:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:11.217664664 +0000 UTC m=+150.972402957" watchObservedRunningTime="2026-01-31 14:57:11.21848318 +0000 UTC m=+150.973221473" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.237122 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" podStartSLOduration=12.237108062 podStartE2EDuration="12.237108062s" podCreationTimestamp="2026-01-31 14:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:11.234894862 +0000 UTC m=+150.989633175" watchObservedRunningTime="2026-01-31 14:57:11.237108062 +0000 UTC m=+150.991846355" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.259159 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrvp2\" (UniqueName: \"kubernetes.io/projected/5c097873-7ca4-491d-86c4-31b2ab99d63d-kube-api-access-rrvp2\") pod \"community-operators-9df4p\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") " pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.259417 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-catalog-content\") pod \"community-operators-9df4p\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") " pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.259560 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-utilities\") pod \"community-operators-9df4p\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") " pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.323052 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.361400 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-utilities\") pod \"community-operators-9df4p\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") " pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.361585 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrvp2\" (UniqueName: \"kubernetes.io/projected/5c097873-7ca4-491d-86c4-31b2ab99d63d-kube-api-access-rrvp2\") pod \"community-operators-9df4p\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") " pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.361606 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-catalog-content\") pod \"community-operators-9df4p\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") " pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.362390 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-utilities\") pod \"community-operators-9df4p\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") " pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.363123 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-catalog-content\") pod \"community-operators-9df4p\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") " pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.380547 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.380576 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.384462 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrvp2\" (UniqueName: \"kubernetes.io/projected/5c097873-7ca4-491d-86c4-31b2ab99d63d-kube-api-access-rrvp2\") pod \"community-operators-9df4p\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") " pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.385523 4763 patch_prober.go:28] interesting pod/console-f9d7485db-9lvgt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.387440 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9lvgt" podUID="0cddc243-3a83-4398-87a9-7a111581bec5" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.397304 4763 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4m6qg"] Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.398240 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4m6qg" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.407477 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4m6qg"] Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.438742 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.468656 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-catalog-content\") pod \"certified-operators-4m6qg\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " pod="openshift-marketplace/certified-operators-4m6qg" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.468984 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-utilities\") pod \"certified-operators-4m6qg\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " pod="openshift-marketplace/certified-operators-4m6qg" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.469013 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq9nf\" (UniqueName: \"kubernetes.io/projected/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-kube-api-access-sq9nf\") pod \"certified-operators-4m6qg\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " pod="openshift-marketplace/certified-operators-4m6qg" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.570594 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gq7w\" (UniqueName: \"kubernetes.io/projected/20c40a34-73d2-4a28-b2bd-31e19e6361d2-kube-api-access-4gq7w\") pod \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.570658 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c40a34-73d2-4a28-b2bd-31e19e6361d2-secret-volume\") pod \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.570780 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c40a34-73d2-4a28-b2bd-31e19e6361d2-config-volume\") pod \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.570920 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-catalog-content\") pod \"certified-operators-4m6qg\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " pod="openshift-marketplace/certified-operators-4m6qg" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.570963 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-utilities\") pod \"certified-operators-4m6qg\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " pod="openshift-marketplace/certified-operators-4m6qg" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.570992 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq9nf\" (UniqueName: \"kubernetes.io/projected/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-kube-api-access-sq9nf\") pod \"certified-operators-4m6qg\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " pod="openshift-marketplace/certified-operators-4m6qg" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.571833 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-catalog-content\") pod \"certified-operators-4m6qg\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " pod="openshift-marketplace/certified-operators-4m6qg" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.572054 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-utilities\") pod \"certified-operators-4m6qg\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " pod="openshift-marketplace/certified-operators-4m6qg" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.572058 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20c40a34-73d2-4a28-b2bd-31e19e6361d2-config-volume" (OuterVolumeSpecName: "config-volume") pod "20c40a34-73d2-4a28-b2bd-31e19e6361d2" (UID: "20c40a34-73d2-4a28-b2bd-31e19e6361d2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.578220 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c40a34-73d2-4a28-b2bd-31e19e6361d2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "20c40a34-73d2-4a28-b2bd-31e19e6361d2" (UID: "20c40a34-73d2-4a28-b2bd-31e19e6361d2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.583126 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20c40a34-73d2-4a28-b2bd-31e19e6361d2-kube-api-access-4gq7w" (OuterVolumeSpecName: "kube-api-access-4gq7w") pod "20c40a34-73d2-4a28-b2bd-31e19e6361d2" (UID: "20c40a34-73d2-4a28-b2bd-31e19e6361d2"). InnerVolumeSpecName "kube-api-access-4gq7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.587456 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mr7l4"] Jan 31 14:57:11 crc kubenswrapper[4763]: E0131 14:57:11.587632 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c40a34-73d2-4a28-b2bd-31e19e6361d2" containerName="collect-profiles" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.587642 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c40a34-73d2-4a28-b2bd-31e19e6361d2" containerName="collect-profiles" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.587761 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="20c40a34-73d2-4a28-b2bd-31e19e6361d2" containerName="collect-profiles" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.588365 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.593009 4763 patch_prober.go:28] interesting pod/downloads-7954f5f757-bh727 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.593045 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bh727" podUID="db0aea6c-f6f8-4548-905b-22d810b334d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.593106 4763 patch_prober.go:28] interesting pod/downloads-7954f5f757-bh727 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.593144 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bh727" podUID="db0aea6c-f6f8-4548-905b-22d810b334d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.598187 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq9nf\" (UniqueName: \"kubernetes.io/projected/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-kube-api-access-sq9nf\") pod \"certified-operators-4m6qg\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " pod="openshift-marketplace/certified-operators-4m6qg" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.600936 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mr7l4"] Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.647833 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k6ddv"] Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.671955 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-catalog-content\") pod \"community-operators-mr7l4\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " 
pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.672007 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-utilities\") pod \"community-operators-mr7l4\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.672031 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4rwh\" (UniqueName: \"kubernetes.io/projected/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-kube-api-access-m4rwh\") pod \"community-operators-mr7l4\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.672061 4763 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c40a34-73d2-4a28-b2bd-31e19e6361d2-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.672073 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gq7w\" (UniqueName: \"kubernetes.io/projected/20c40a34-73d2-4a28-b2bd-31e19e6361d2-kube-api-access-4gq7w\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.672082 4763 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c40a34-73d2-4a28-b2bd-31e19e6361d2-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.683358 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.713942 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4m6qg" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.772941 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-catalog-content\") pod \"community-operators-mr7l4\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.772995 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-utilities\") pod \"community-operators-mr7l4\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.773019 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4rwh\" (UniqueName: \"kubernetes.io/projected/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-kube-api-access-m4rwh\") pod \"community-operators-mr7l4\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.773565 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-catalog-content\") pod \"community-operators-mr7l4\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.773850 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-utilities\") pod \"community-operators-mr7l4\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.800706 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4rwh\" (UniqueName: \"kubernetes.io/projected/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-kube-api-access-m4rwh\") pod \"community-operators-mr7l4\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.841090 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.884889 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.888219 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:11 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:11 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:11 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.888273 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" 
podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.908734 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.944084 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9df4p"] Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.999668 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4m6qg"] Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.006609 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.006657 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.017873 4763 patch_prober.go:28] interesting pod/apiserver-76f77b778f-tv9s8 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 31 14:57:12 crc kubenswrapper[4763]: [+]log ok Jan 31 14:57:12 crc kubenswrapper[4763]: [+]etcd ok Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/generic-apiserver-start-informers ok Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/max-in-flight-filter ok Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 31 14:57:12 crc kubenswrapper[4763]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/project.openshift.io-projectcache ok Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/openshift.io-startinformers ok Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 31 14:57:12 crc kubenswrapper[4763]: livez check failed Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.017929 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" podUID="330d3fd9-790f-406d-a122-152a1ab07e5c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.093761 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.108146 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.227650 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" event={"ID":"329bb364-3958-490e-b065-d2ce7ee1567d","Type":"ContainerStarted","Data":"361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545"} Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.227859 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.232378 4763 generic.go:334] "Generic (PLEG): container finished" podID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerID="f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d" exitCode=0 Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.232517 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6ddv" event={"ID":"b8a35a73-67a0-4bb4-9954-46350d31b017","Type":"ContainerDied","Data":"f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d"} Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.232549 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6ddv" event={"ID":"b8a35a73-67a0-4bb4-9954-46350d31b017","Type":"ContainerStarted","Data":"6f670464716ecf8ab5d99a2382a3bcaf7162a13bd03fa816cb2c7b4734ade299"} Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.242006 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mr7l4"] Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.244910 4763 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.245455 4763 generic.go:334] "Generic (PLEG): container finished" podID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerID="020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba" exitCode=0 Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.247006 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m6qg" event={"ID":"5a85c02e-9d6e-4d11-be81-242bf4fee8c4","Type":"ContainerDied","Data":"020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba"} Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.247574 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m6qg" event={"ID":"5a85c02e-9d6e-4d11-be81-242bf4fee8c4","Type":"ContainerStarted","Data":"9c7dca8f63ce2a8f4eb26e014c54162f55c4578fef6f425b844a6c85dc4561db"} Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.249172 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" podStartSLOduration=131.249153459 podStartE2EDuration="2m11.249153459s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:12.246811576 +0000 UTC m=+152.001549869" watchObservedRunningTime="2026-01-31 14:57:12.249153459 +0000 UTC m=+152.003891752" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.260758 4763 generic.go:334] "Generic (PLEG): container finished" podID="49e5646c-e6be-4d7c-9839-540a69daf0e9" containerID="1c3251daea8c98cbe723a2aab08e84229fbcd9000a33f6fdb2dfc2f339c11130" exitCode=0 Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.260819 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"49e5646c-e6be-4d7c-9839-540a69daf0e9","Type":"ContainerDied","Data":"1c3251daea8c98cbe723a2aab08e84229fbcd9000a33f6fdb2dfc2f339c11130"} Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.265521 4763 generic.go:334] "Generic (PLEG): container finished" podID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerID="8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc" exitCode=0 Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.265586 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9df4p" event={"ID":"5c097873-7ca4-491d-86c4-31b2ab99d63d","Type":"ContainerDied","Data":"8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc"} Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.265608 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9df4p" event={"ID":"5c097873-7ca4-491d-86c4-31b2ab99d63d","Type":"ContainerStarted","Data":"1eaa5c467faffeb2ba7ad8dc241225ca0c8240c2cf3a8e19cde7c5ee1bfecc47"} Jan 31 14:57:12 crc kubenswrapper[4763]: W0131 14:57:12.269325 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3a48e2e_604d_4cd9_b0c8_a290f4a81ffa.slice/crio-29747698a85a5ba3ecc912242e839ad570821011bfaa89976cf716705b0e4372 WatchSource:0}: Error finding container 29747698a85a5ba3ecc912242e839ad570821011bfaa89976cf716705b0e4372: Status 404 returned error can't find the container with id 29747698a85a5ba3ecc912242e839ad570821011bfaa89976cf716705b0e4372 Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.270297 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.275762 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" event={"ID":"20c40a34-73d2-4a28-b2bd-31e19e6361d2","Type":"ContainerDied","Data":"1d970cab673873dbd0ffb836dc82303a0a52ef1bbbe7617fdf76121fb69fc8f6"} Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.275825 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d970cab673873dbd0ffb836dc82303a0a52ef1bbbe7617fdf76121fb69fc8f6" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.283239 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.299334 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.325560 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.336955 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.337559 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.339678 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.341862 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.342131 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.380339 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e7ac98e-5206-404b-a648-3ef3d778619c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5e7ac98e-5206-404b-a648-3ef3d778619c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.380443 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e7ac98e-5206-404b-a648-3ef3d778619c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5e7ac98e-5206-404b-a648-3ef3d778619c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.483734 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e7ac98e-5206-404b-a648-3ef3d778619c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5e7ac98e-5206-404b-a648-3ef3d778619c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.484543 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e7ac98e-5206-404b-a648-3ef3d778619c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5e7ac98e-5206-404b-a648-3ef3d778619c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.484621 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e7ac98e-5206-404b-a648-3ef3d778619c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5e7ac98e-5206-404b-a648-3ef3d778619c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.519424 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e7ac98e-5206-404b-a648-3ef3d778619c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5e7ac98e-5206-404b-a648-3ef3d778619c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.722019 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.904761 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:12 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:12 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:12 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.905163 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.996892 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 14:57:13 crc kubenswrapper[4763]: W0131 14:57:13.010622 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5e7ac98e_5206_404b_a648_3ef3d778619c.slice/crio-ddb43162b5b5c5ccf5049c1ecc38ca80a82b1da0bd3e6319c633833964c09b50 WatchSource:0}: Error finding container ddb43162b5b5c5ccf5049c1ecc38ca80a82b1da0bd3e6319c633833964c09b50: Status 404 returned error can't find the container with id ddb43162b5b5c5ccf5049c1ecc38ca80a82b1da0bd3e6319c633833964c09b50 Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.182620 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wnmmq"] Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.183583 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.185401 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.193185 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnmmq"] Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.276119 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5e7ac98e-5206-404b-a648-3ef3d778619c","Type":"ContainerStarted","Data":"ddb43162b5b5c5ccf5049c1ecc38ca80a82b1da0bd3e6319c633833964c09b50"} Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.279175 4763 generic.go:334] "Generic (PLEG): container finished" podID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerID="cbcf059643f243b97663d9030a999deafef368163de2196a0d497c8e7eabbc09" exitCode=0 Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.279291 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr7l4" event={"ID":"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa","Type":"ContainerDied","Data":"cbcf059643f243b97663d9030a999deafef368163de2196a0d497c8e7eabbc09"} Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.279333 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr7l4" event={"ID":"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa","Type":"ContainerStarted","Data":"29747698a85a5ba3ecc912242e839ad570821011bfaa89976cf716705b0e4372"} Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.293492 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-catalog-content\") pod \"redhat-marketplace-wnmmq\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") " pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.293548 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzr25\" (UniqueName: \"kubernetes.io/projected/2434f0b9-846a-444c-b487-745d4010002b-kube-api-access-tzr25\") pod \"redhat-marketplace-wnmmq\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") " pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.293589 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-utilities\") pod \"redhat-marketplace-wnmmq\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") " pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.394875 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-utilities\") pod \"redhat-marketplace-wnmmq\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") " pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.395052 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-catalog-content\") pod 
\"redhat-marketplace-wnmmq\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") " pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.395103 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzr25\" (UniqueName: \"kubernetes.io/projected/2434f0b9-846a-444c-b487-745d4010002b-kube-api-access-tzr25\") pod \"redhat-marketplace-wnmmq\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") " pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.397553 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-utilities\") pod \"redhat-marketplace-wnmmq\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") " pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.398444 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-catalog-content\") pod \"redhat-marketplace-wnmmq\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") " pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.436870 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzr25\" (UniqueName: \"kubernetes.io/projected/2434f0b9-846a-444c-b487-745d4010002b-kube-api-access-tzr25\") pod \"redhat-marketplace-wnmmq\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") " pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.499285 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.515485 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.578120 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-22bkm"] Jan 31 14:57:13 crc kubenswrapper[4763]: E0131 14:57:13.578590 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49e5646c-e6be-4d7c-9839-540a69daf0e9" containerName="pruner" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.578734 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="49e5646c-e6be-4d7c-9839-540a69daf0e9" containerName="pruner" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.578932 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="49e5646c-e6be-4d7c-9839-540a69daf0e9" containerName="pruner" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.579828 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.592790 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-22bkm"] Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.596339 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49e5646c-e6be-4d7c-9839-540a69daf0e9-kubelet-dir\") pod \"49e5646c-e6be-4d7c-9839-540a69daf0e9\" (UID: \"49e5646c-e6be-4d7c-9839-540a69daf0e9\") " Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.596492 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49e5646c-e6be-4d7c-9839-540a69daf0e9-kube-api-access\") pod \"49e5646c-e6be-4d7c-9839-540a69daf0e9\" (UID: \"49e5646c-e6be-4d7c-9839-540a69daf0e9\") " Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.596809 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49e5646c-e6be-4d7c-9839-540a69daf0e9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "49e5646c-e6be-4d7c-9839-540a69daf0e9" (UID: "49e5646c-e6be-4d7c-9839-540a69daf0e9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.602288 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49e5646c-e6be-4d7c-9839-540a69daf0e9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "49e5646c-e6be-4d7c-9839-540a69daf0e9" (UID: "49e5646c-e6be-4d7c-9839-540a69daf0e9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.697599 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-catalog-content\") pod \"redhat-marketplace-22bkm\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.697923 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-utilities\") pod \"redhat-marketplace-22bkm\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.697994 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dshgn\" (UniqueName: \"kubernetes.io/projected/1e24d612-62ed-4bd5-8e07-889710d16851-kube-api-access-dshgn\") pod \"redhat-marketplace-22bkm\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.698040 4763 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49e5646c-e6be-4d7c-9839-540a69daf0e9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.698061 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/49e5646c-e6be-4d7c-9839-540a69daf0e9-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.799002 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dshgn\" (UniqueName: \"kubernetes.io/projected/1e24d612-62ed-4bd5-8e07-889710d16851-kube-api-access-dshgn\") pod \"redhat-marketplace-22bkm\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.799064 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-catalog-content\") pod \"redhat-marketplace-22bkm\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.799088 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-utilities\") pod \"redhat-marketplace-22bkm\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.799610 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-utilities\") pod \"redhat-marketplace-22bkm\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.800102 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-catalog-content\") pod \"redhat-marketplace-22bkm\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.816284 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dshgn\" (UniqueName: \"kubernetes.io/projected/1e24d612-62ed-4bd5-8e07-889710d16851-kube-api-access-dshgn\") pod \"redhat-marketplace-22bkm\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.888462 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:13 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:13 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:13 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.888665 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.903910 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.983892 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnmmq"] Jan 31 14:57:14 crc kubenswrapper[4763]: W0131 14:57:14.019834 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2434f0b9_846a_444c_b487_745d4010002b.slice/crio-3d06c6fc284feb92aefe314de6491da4d8ed4752eabe5f4dc5c6709aa6023802 WatchSource:0}: Error finding container 3d06c6fc284feb92aefe314de6491da4d8ed4752eabe5f4dc5c6709aa6023802: Status 404 returned error can't find the container with id 3d06c6fc284feb92aefe314de6491da4d8ed4752eabe5f4dc5c6709aa6023802 Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.122663 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-22bkm"] Jan 31 14:57:14 crc kubenswrapper[4763]: W0131 14:57:14.133258 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e24d612_62ed_4bd5_8e07_889710d16851.slice/crio-c82ecb291538ae24976b93059fe5012e9eba52e1be048225cfff7c40417c81ce WatchSource:0}: Error finding container c82ecb291538ae24976b93059fe5012e9eba52e1be048225cfff7c40417c81ce: Status 404 returned error can't find the container with id c82ecb291538ae24976b93059fe5012e9eba52e1be048225cfff7c40417c81ce Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.177409 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.177461 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.181048 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wxzg8"] Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.182147 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wxzg8" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.185816 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.191466 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wxzg8"] Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.210255 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-catalog-content\") pod \"redhat-operators-wxzg8\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") " pod="openshift-marketplace/redhat-operators-wxzg8" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.210557 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bxzh\" (UniqueName: \"kubernetes.io/projected/5f3cc890-2041-4983-8501-088c40c22b77-kube-api-access-7bxzh\") pod \"redhat-operators-wxzg8\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") " pod="openshift-marketplace/redhat-operators-wxzg8" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.210605 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-utilities\") pod \"redhat-operators-wxzg8\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") " pod="openshift-marketplace/redhat-operators-wxzg8" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.289282 4763 generic.go:334] "Generic (PLEG): container finished" podID="2434f0b9-846a-444c-b487-745d4010002b" containerID="05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579" exitCode=0 Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.289763 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnmmq" event={"ID":"2434f0b9-846a-444c-b487-745d4010002b","Type":"ContainerDied","Data":"05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579"} Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.289816 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnmmq" event={"ID":"2434f0b9-846a-444c-b487-745d4010002b","Type":"ContainerStarted","Data":"3d06c6fc284feb92aefe314de6491da4d8ed4752eabe5f4dc5c6709aa6023802"} Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.293808 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-22bkm" event={"ID":"1e24d612-62ed-4bd5-8e07-889710d16851","Type":"ContainerStarted","Data":"c82ecb291538ae24976b93059fe5012e9eba52e1be048225cfff7c40417c81ce"} Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.295981 4763 generic.go:334] "Generic (PLEG): container finished" podID="5e7ac98e-5206-404b-a648-3ef3d778619c" containerID="47869dfdc0f5a04213f8f03ac61c750dc9ff1cfcf0748564443776a01410c432" exitCode=0 Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.296028 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5e7ac98e-5206-404b-a648-3ef3d778619c","Type":"ContainerDied","Data":"47869dfdc0f5a04213f8f03ac61c750dc9ff1cfcf0748564443776a01410c432"} Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.303891 4763 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"49e5646c-e6be-4d7c-9839-540a69daf0e9","Type":"ContainerDied","Data":"a0bc2a82642ee311d6552a5e94a028bf0431b120d860d1f88eae5e6a24094925"} Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.303933 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0bc2a82642ee311d6552a5e94a028bf0431b120d860d1f88eae5e6a24094925" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.304008 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.312371 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-utilities\") pod \"redhat-operators-wxzg8\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") " pod="openshift-marketplace/redhat-operators-wxzg8" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.312467 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-catalog-content\") pod \"redhat-operators-wxzg8\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") " pod="openshift-marketplace/redhat-operators-wxzg8" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.312512 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bxzh\" (UniqueName: \"kubernetes.io/projected/5f3cc890-2041-4983-8501-088c40c22b77-kube-api-access-7bxzh\") pod \"redhat-operators-wxzg8\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") " pod="openshift-marketplace/redhat-operators-wxzg8" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.313385 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-catalog-content\") pod \"redhat-operators-wxzg8\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") " pod="openshift-marketplace/redhat-operators-wxzg8" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.313945 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-utilities\") pod \"redhat-operators-wxzg8\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") " pod="openshift-marketplace/redhat-operators-wxzg8" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.335142 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bxzh\" (UniqueName: \"kubernetes.io/projected/5f3cc890-2041-4983-8501-088c40c22b77-kube-api-access-7bxzh\") pod \"redhat-operators-wxzg8\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") " pod="openshift-marketplace/redhat-operators-wxzg8" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.497105 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wxzg8" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.578112 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4bfmh"] Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.580781 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.591262 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4bfmh"] Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.616607 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-utilities\") pod \"redhat-operators-4bfmh\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.616662 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-catalog-content\") pod \"redhat-operators-4bfmh\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.616685 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9fps\" (UniqueName: \"kubernetes.io/projected/463b0d45-1b3b-46a1-afbd-650fa065b38f-kube-api-access-g9fps\") pod \"redhat-operators-4bfmh\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.717915 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-utilities\") pod \"redhat-operators-4bfmh\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.718296 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-catalog-content\") pod \"redhat-operators-4bfmh\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.718325 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9fps\" (UniqueName: \"kubernetes.io/projected/463b0d45-1b3b-46a1-afbd-650fa065b38f-kube-api-access-g9fps\") pod \"redhat-operators-4bfmh\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.718582 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-utilities\") pod \"redhat-operators-4bfmh\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.718983 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-catalog-content\") pod \"redhat-operators-4bfmh\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.738875 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-g9fps\" (UniqueName: \"kubernetes.io/projected/463b0d45-1b3b-46a1-afbd-650fa065b38f-kube-api-access-g9fps\") pod \"redhat-operators-4bfmh\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.787709 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wxzg8"] Jan 31 14:57:14 crc kubenswrapper[4763]: W0131 14:57:14.796389 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f3cc890_2041_4983_8501_088c40c22b77.slice/crio-2abcdafc7fd1d0b73e0182854de8d66e2f3700062dfd88e6ccab246a85b4c70b WatchSource:0}: Error finding container 2abcdafc7fd1d0b73e0182854de8d66e2f3700062dfd88e6ccab246a85b4c70b: Status 404 returned error can't find the container with id 2abcdafc7fd1d0b73e0182854de8d66e2f3700062dfd88e6ccab246a85b4c70b Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.886857 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:14 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:14 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:14 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.886907 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.904608 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.316236 4763 generic.go:334] "Generic (PLEG): container finished" podID="1e24d612-62ed-4bd5-8e07-889710d16851" containerID="b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f" exitCode=0 Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.316335 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-22bkm" event={"ID":"1e24d612-62ed-4bd5-8e07-889710d16851","Type":"ContainerDied","Data":"b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f"} Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.319581 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxzg8" event={"ID":"5f3cc890-2041-4983-8501-088c40c22b77","Type":"ContainerStarted","Data":"2abcdafc7fd1d0b73e0182854de8d66e2f3700062dfd88e6ccab246a85b4c70b"} Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.414355 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4bfmh"] Jan 31 14:57:15 crc kubenswrapper[4763]: W0131 14:57:15.440369 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod463b0d45_1b3b_46a1_afbd_650fa065b38f.slice/crio-64d775f9e024e4389d9fed6e15e2915fc320014ae014969ebec947a0370a2ca9 WatchSource:0}: Error finding container 64d775f9e024e4389d9fed6e15e2915fc320014ae014969ebec947a0370a2ca9: Status 404 returned error can't find the container with id 64d775f9e024e4389d9fed6e15e2915fc320014ae014969ebec947a0370a2ca9 Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.636151 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.753361 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e7ac98e-5206-404b-a648-3ef3d778619c-kubelet-dir\") pod \"5e7ac98e-5206-404b-a648-3ef3d778619c\" (UID: \"5e7ac98e-5206-404b-a648-3ef3d778619c\") " Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.753481 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e7ac98e-5206-404b-a648-3ef3d778619c-kube-api-access\") pod \"5e7ac98e-5206-404b-a648-3ef3d778619c\" (UID: \"5e7ac98e-5206-404b-a648-3ef3d778619c\") " Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.754989 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e7ac98e-5206-404b-a648-3ef3d778619c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5e7ac98e-5206-404b-a648-3ef3d778619c" (UID: "5e7ac98e-5206-404b-a648-3ef3d778619c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.767839 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e7ac98e-5206-404b-a648-3ef3d778619c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5e7ac98e-5206-404b-a648-3ef3d778619c" (UID: "5e7ac98e-5206-404b-a648-3ef3d778619c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.855386 4763 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e7ac98e-5206-404b-a648-3ef3d778619c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.855427 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e7ac98e-5206-404b-a648-3ef3d778619c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.887077 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:15 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:15 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:15 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.887137 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.328022 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5e7ac98e-5206-404b-a648-3ef3d778619c","Type":"ContainerDied","Data":"ddb43162b5b5c5ccf5049c1ecc38ca80a82b1da0bd3e6319c633833964c09b50"} Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.328046 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.328059 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddb43162b5b5c5ccf5049c1ecc38ca80a82b1da0bd3e6319c633833964c09b50" Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.330128 4763 generic.go:334] "Generic (PLEG): container finished" podID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerID="af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e" exitCode=0 Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.330194 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bfmh" event={"ID":"463b0d45-1b3b-46a1-afbd-650fa065b38f","Type":"ContainerDied","Data":"af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e"} Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.330222 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bfmh" event={"ID":"463b0d45-1b3b-46a1-afbd-650fa065b38f","Type":"ContainerStarted","Data":"64d775f9e024e4389d9fed6e15e2915fc320014ae014969ebec947a0370a2ca9"} Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.332075 4763 generic.go:334] "Generic (PLEG): container finished" podID="5f3cc890-2041-4983-8501-088c40c22b77" containerID="427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18" exitCode=0 Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.332098 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxzg8" event={"ID":"5f3cc890-2041-4983-8501-088c40c22b77","Type":"ContainerDied","Data":"427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18"} Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.884832 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:16 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:16 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:16 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.884906 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:17 crc kubenswrapper[4763]: I0131 14:57:17.013168 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:17 crc kubenswrapper[4763]: I0131 14:57:17.018900 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:17 crc kubenswrapper[4763]: I0131 14:57:17.372208 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:17 crc kubenswrapper[4763]: I0131 14:57:17.885095 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:17 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 
14:57:17 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:17 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:17 crc kubenswrapper[4763]: I0131 14:57:17.885160 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:18 crc kubenswrapper[4763]: I0131 14:57:18.886065 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:18 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:18 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:18 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:18 crc kubenswrapper[4763]: I0131 14:57:18.886148 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:19 crc kubenswrapper[4763]: I0131 14:57:19.885154 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:19 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:19 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:19 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:19 crc kubenswrapper[4763]: I0131 14:57:19.885426 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:20 crc kubenswrapper[4763]: I0131 14:57:20.886075 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:20 crc kubenswrapper[4763]: I0131 14:57:20.889185 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:21 crc kubenswrapper[4763]: I0131 14:57:21.380933 4763 patch_prober.go:28] interesting pod/console-f9d7485db-9lvgt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 31 14:57:21 crc kubenswrapper[4763]: I0131 14:57:21.381312 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9lvgt" podUID="0cddc243-3a83-4398-87a9-7a111581bec5" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 31 14:57:21 crc kubenswrapper[4763]: I0131 14:57:21.593541 4763 patch_prober.go:28] interesting pod/downloads-7954f5f757-bh727 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 31 14:57:21 crc kubenswrapper[4763]: I0131 14:57:21.594055 
4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bh727" podUID="db0aea6c-f6f8-4548-905b-22d810b334d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 31 14:57:21 crc kubenswrapper[4763]: I0131 14:57:21.593580 4763 patch_prober.go:28] interesting pod/downloads-7954f5f757-bh727 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 31 14:57:21 crc kubenswrapper[4763]: I0131 14:57:21.594539 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bh727" podUID="db0aea6c-f6f8-4548-905b-22d810b334d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 31 14:57:23 crc kubenswrapper[4763]: I0131 14:57:23.070193 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:57:23 crc kubenswrapper[4763]: I0131 14:57:23.081045 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:57:23 crc kubenswrapper[4763]: I0131 14:57:23.379718 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:57:25 crc kubenswrapper[4763]: I0131 14:57:25.908934 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bpxtg"] Jan 31 14:57:25 crc kubenswrapper[4763]: I0131 14:57:25.909196 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" podUID="b67dedfb-accc-467d-a3bb-508eab4f88c8" containerName="controller-manager" containerID="cri-o://b8fb7a2f200b2701b2184c4c721af8933812270f9a51bf230ed65be2f7cd5bd1" gracePeriod=30 Jan 31 14:57:25 crc kubenswrapper[4763]: I0131 14:57:25.912843 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"] Jan 31 14:57:25 crc kubenswrapper[4763]: I0131 14:57:25.913111 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" podUID="e1b409a5-8274-478d-98bf-fe2171d90c63" containerName="route-controller-manager" containerID="cri-o://c58f9ab4796cf505743aa587609be5cc9b4b478501a00ba9268fe2ccf4583f9a" gracePeriod=30 Jan 31 14:57:30 crc kubenswrapper[4763]: I0131 14:57:30.902572 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:31 crc kubenswrapper[4763]: I0131 14:57:31.287935 4763 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bpxtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 31 14:57:31 crc kubenswrapper[4763]: I0131 14:57:31.287998 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" podUID="b67dedfb-accc-467d-a3bb-508eab4f88c8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 31 14:57:31 crc kubenswrapper[4763]: I0131 14:57:31.384628 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:57:31 crc kubenswrapper[4763]: I0131 14:57:31.388128 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:57:31 crc kubenswrapper[4763]: I0131 14:57:31.605965 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-bh727" Jan 31 14:57:32 crc kubenswrapper[4763]: I0131 14:57:32.049581 4763 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mpmpg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 31 14:57:32 crc kubenswrapper[4763]: I0131 14:57:32.050160 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" podUID="e1b409a5-8274-478d-98bf-fe2171d90c63" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: 
connection refused" Jan 31 14:57:32 crc kubenswrapper[4763]: I0131 14:57:32.435403 4763 generic.go:334] "Generic (PLEG): container finished" podID="e1b409a5-8274-478d-98bf-fe2171d90c63" containerID="c58f9ab4796cf505743aa587609be5cc9b4b478501a00ba9268fe2ccf4583f9a" exitCode=0 Jan 31 14:57:32 crc kubenswrapper[4763]: I0131 14:57:32.435473 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" event={"ID":"e1b409a5-8274-478d-98bf-fe2171d90c63","Type":"ContainerDied","Data":"c58f9ab4796cf505743aa587609be5cc9b4b478501a00ba9268fe2ccf4583f9a"} Jan 31 14:57:33 crc kubenswrapper[4763]: I0131 14:57:33.446666 4763 generic.go:334] "Generic (PLEG): container finished" podID="b67dedfb-accc-467d-a3bb-508eab4f88c8" containerID="b8fb7a2f200b2701b2184c4c721af8933812270f9a51bf230ed65be2f7cd5bd1" exitCode=0 Jan 31 14:57:33 crc kubenswrapper[4763]: I0131 14:57:33.446800 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" event={"ID":"b67dedfb-accc-467d-a3bb-508eab4f88c8","Type":"ContainerDied","Data":"b8fb7a2f200b2701b2184c4c721af8933812270f9a51bf230ed65be2f7cd5bd1"} Jan 31 14:57:41 crc kubenswrapper[4763]: I0131 14:57:41.287082 4763 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bpxtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 31 14:57:41 crc kubenswrapper[4763]: I0131 14:57:41.287885 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" podUID="b67dedfb-accc-467d-a3bb-508eab4f88c8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 31 14:57:42 crc kubenswrapper[4763]: I0131 14:57:42.586216 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" Jan 31 14:57:43 crc kubenswrapper[4763]: I0131 14:57:43.049632 4763 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mpmpg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 14:57:43 crc kubenswrapper[4763]: I0131 14:57:43.049722 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" podUID="e1b409a5-8274-478d-98bf-fe2171d90c63" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 14:57:44 crc kubenswrapper[4763]: I0131 14:57:44.177396 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:57:44 crc kubenswrapper[4763]: I0131 14:57:44.177470 4763 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:57:46 crc kubenswrapper[4763]: E0131 14:57:46.965936 4763 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:d66a84cc704878f5e58a60c449eb4244b9e250105c614dae1d2418e90b51befa: Get \"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:d66a84cc704878f5e58a60c449eb4244b9e250105c614dae1d2418e90b51befa\": context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 14:57:46 crc kubenswrapper[4763]: E0131 14:57:46.966584 4763 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tzr25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wnmmq_openshift-marketplace(2434f0b9-846a-444c-b487-745d4010002b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:d66a84cc704878f5e58a60c449eb4244b9e250105c614dae1d2418e90b51befa: Get \"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:d66a84cc704878f5e58a60c449eb4244b9e250105c614dae1d2418e90b51befa\": context canceled" logger="UnhandledError" Jan 31 14:57:46 crc kubenswrapper[4763]: E0131 14:57:46.967896 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:d66a84cc704878f5e58a60c449eb4244b9e250105c614dae1d2418e90b51befa: Get \\\"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:d66a84cc704878f5e58a60c449eb4244b9e250105c614dae1d2418e90b51befa\\\": context canceled\"" pod="openshift-marketplace/redhat-marketplace-wnmmq" 
podUID="2434f0b9-846a-444c-b487-745d4010002b" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.009249 4763 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.009436 4763 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vk9ln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-k6ddv_openshift-marketplace(b8a35a73-67a0-4bb4-9954-46350d31b017): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.010722 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-k6ddv" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.033803 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.092514 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k"] Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.092800 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1b409a5-8274-478d-98bf-fe2171d90c63" containerName="route-controller-manager" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.092815 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b409a5-8274-478d-98bf-fe2171d90c63" containerName="route-controller-manager" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.092828 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e7ac98e-5206-404b-a648-3ef3d778619c" containerName="pruner" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.092836 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e7ac98e-5206-404b-a648-3ef3d778619c" containerName="pruner" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.092969 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1b409a5-8274-478d-98bf-fe2171d90c63" containerName="route-controller-manager" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.092987 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e7ac98e-5206-404b-a648-3ef3d778619c" containerName="pruner" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.093382 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k"] Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.093468 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.097680 4763 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.097862 4763 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m4rwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mr7l4_openshift-marketplace(f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.099031 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mr7l4" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.118992 4763 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.119140 4763 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sq9nf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4m6qg_openshift-marketplace(5a85c02e-9d6e-4d11-be81-242bf4fee8c4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.120550 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4m6qg" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.202077 4763 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.203037 4763 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrvp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9df4p_openshift-marketplace(5c097873-7ca4-491d-86c4-31b2ab99d63d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.204903 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9df4p" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.220061 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-client-ca\") pod \"e1b409a5-8274-478d-98bf-fe2171d90c63\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.220125 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-config\") pod \"e1b409a5-8274-478d-98bf-fe2171d90c63\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.221427 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-config" (OuterVolumeSpecName: "config") pod "e1b409a5-8274-478d-98bf-fe2171d90c63" (UID: "e1b409a5-8274-478d-98bf-fe2171d90c63"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.221469 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-client-ca" (OuterVolumeSpecName: "client-ca") pod "e1b409a5-8274-478d-98bf-fe2171d90c63" (UID: "e1b409a5-8274-478d-98bf-fe2171d90c63"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.221533 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7cfv\" (UniqueName: \"kubernetes.io/projected/e1b409a5-8274-478d-98bf-fe2171d90c63-kube-api-access-v7cfv\") pod \"e1b409a5-8274-478d-98bf-fe2171d90c63\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.221563 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1b409a5-8274-478d-98bf-fe2171d90c63-serving-cert\") pod \"e1b409a5-8274-478d-98bf-fe2171d90c63\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.222684 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-client-ca\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.222958 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-config\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.222978 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-serving-cert\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.223015 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrm55\" (UniqueName: \"kubernetes.io/projected/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-kube-api-access-nrm55\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.223070 4763 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.223082 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.238666 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1b409a5-8274-478d-98bf-fe2171d90c63-kube-api-access-v7cfv" (OuterVolumeSpecName: "kube-api-access-v7cfv") pod "e1b409a5-8274-478d-98bf-fe2171d90c63" (UID: "e1b409a5-8274-478d-98bf-fe2171d90c63"). InnerVolumeSpecName "kube-api-access-v7cfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.259543 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b409a5-8274-478d-98bf-fe2171d90c63-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e1b409a5-8274-478d-98bf-fe2171d90c63" (UID: "e1b409a5-8274-478d-98bf-fe2171d90c63"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.323722 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-config\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.323768 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-serving-cert\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.323804 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrm55\" (UniqueName: \"kubernetes.io/projected/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-kube-api-access-nrm55\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.323870 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-client-ca\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.324245 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7cfv\" (UniqueName: \"kubernetes.io/projected/e1b409a5-8274-478d-98bf-fe2171d90c63-kube-api-access-v7cfv\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.325856 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1b409a5-8274-478d-98bf-fe2171d90c63-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.325653 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-client-ca\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.325491 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-config\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " 
pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.343567 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrm55\" (UniqueName: \"kubernetes.io/projected/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-kube-api-access-nrm55\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.353207 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-serving-cert\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.431635 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.541795 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.541871 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" event={"ID":"e1b409a5-8274-478d-98bf-fe2171d90c63","Type":"ContainerDied","Data":"7b560d0bfea717d413c6f7f997b55322350cc5d7f6770006679df4dc57bee56a"} Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.541950 4763 scope.go:117] "RemoveContainer" containerID="c58f9ab4796cf505743aa587609be5cc9b4b478501a00ba9268fe2ccf4583f9a" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.552723 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-26pm5"] Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.666970 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"] Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.669074 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"] Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.318431 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.319354 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.325785 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.326161 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.332218 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.375011 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.439166 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/985646ce-82c3-4387-8f8d-bf1ac731426c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"985646ce-82c3-4387-8f8d-bf1ac731426c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.439205 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/985646ce-82c3-4387-8f8d-bf1ac731426c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"985646ce-82c3-4387-8f8d-bf1ac731426c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.541220 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/985646ce-82c3-4387-8f8d-bf1ac731426c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"985646ce-82c3-4387-8f8d-bf1ac731426c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.541303 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/985646ce-82c3-4387-8f8d-bf1ac731426c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"985646ce-82c3-4387-8f8d-bf1ac731426c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.541358 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/985646ce-82c3-4387-8f8d-bf1ac731426c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"985646ce-82c3-4387-8f8d-bf1ac731426c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.557026 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/985646ce-82c3-4387-8f8d-bf1ac731426c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"985646ce-82c3-4387-8f8d-bf1ac731426c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.674760 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.048988 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1b409a5-8274-478d-98bf-fe2171d90c63" path="/var/lib/kubelet/pods/e1b409a5-8274-478d-98bf-fe2171d90c63/volumes"
Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.423826 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-k6ddv" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017"
Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.423981 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wnmmq" podUID="2434f0b9-846a-444c-b487-745d4010002b"
Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.424082 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4m6qg" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4"
Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.424173 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9df4p" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d"
Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.424576 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-mr7l4" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa"
Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.474926 4763 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.475092 4763 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dshgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-22bkm_openshift-marketplace(1e24d612-62ed-4bd5-8e07-889710d16851): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.476470 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-22bkm" podUID="1e24d612-62ed-4bd5-8e07-889710d16851"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.529624 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.556313 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"]
Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.556641 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b67dedfb-accc-467d-a3bb-508eab4f88c8" containerName="controller-manager"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.556663 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b67dedfb-accc-467d-a3bb-508eab4f88c8" containerName="controller-manager"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.556827 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b67dedfb-accc-467d-a3bb-508eab4f88c8" containerName="controller-manager"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.557609 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.560455 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" event={"ID":"b67dedfb-accc-467d-a3bb-508eab4f88c8","Type":"ContainerDied","Data":"5825be7f5b0a2372a0714ff20d5e467974a43da635470237d607739086eb1094"}
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.560520 4763 scope.go:117] "RemoveContainer" containerID="b8fb7a2f200b2701b2184c4c721af8933812270f9a51bf230ed65be2f7cd5bd1"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.560594 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.565058 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"]
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.584906 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-26pm5" event={"ID":"84302428-88e1-47ba-84cc-7d12472f9aa2","Type":"ContainerStarted","Data":"fd29a6a06684bc8f2ef118669fc48ea72399bc351cac7924ed1d06f13faf06d9"}
Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.586945 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-22bkm" podUID="1e24d612-62ed-4bd5-8e07-889710d16851"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.656605 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-proxy-ca-bundles\") pod \"b67dedfb-accc-467d-a3bb-508eab4f88c8\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") "
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.656742 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-config\") pod \"b67dedfb-accc-467d-a3bb-508eab4f88c8\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") "
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.656797 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b67dedfb-accc-467d-a3bb-508eab4f88c8-serving-cert\") pod \"b67dedfb-accc-467d-a3bb-508eab4f88c8\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") "
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.656829 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5csj\" (UniqueName: \"kubernetes.io/projected/b67dedfb-accc-467d-a3bb-508eab4f88c8-kube-api-access-j5csj\") pod \"b67dedfb-accc-467d-a3bb-508eab4f88c8\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") "
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.656872 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-client-ca\") pod \"b67dedfb-accc-467d-a3bb-508eab4f88c8\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") "
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.658224 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b67dedfb-accc-467d-a3bb-508eab4f88c8" (UID: "b67dedfb-accc-467d-a3bb-508eab4f88c8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.658428 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-config" (OuterVolumeSpecName: "config") pod "b67dedfb-accc-467d-a3bb-508eab4f88c8" (UID: "b67dedfb-accc-467d-a3bb-508eab4f88c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.658782 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-client-ca" (OuterVolumeSpecName: "client-ca") pod "b67dedfb-accc-467d-a3bb-508eab4f88c8" (UID: "b67dedfb-accc-467d-a3bb-508eab4f88c8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.664121 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b67dedfb-accc-467d-a3bb-508eab4f88c8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b67dedfb-accc-467d-a3bb-508eab4f88c8" (UID: "b67dedfb-accc-467d-a3bb-508eab4f88c8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.667139 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b67dedfb-accc-467d-a3bb-508eab4f88c8-kube-api-access-j5csj" (OuterVolumeSpecName: "kube-api-access-j5csj") pod "b67dedfb-accc-467d-a3bb-508eab4f88c8" (UID: "b67dedfb-accc-467d-a3bb-508eab4f88c8"). InnerVolumeSpecName "kube-api-access-j5csj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.703338 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k"]
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.746044 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 31 14:57:49 crc kubenswrapper[4763]: W0131 14:57:49.748526 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod985646ce_82c3_4387_8f8d_bf1ac731426c.slice/crio-438aba147fbe4efa4a5da0545ebc6a5f8f702a5e276652ac1c9ea1976cafb170 WatchSource:0}: Error finding container 438aba147fbe4efa4a5da0545ebc6a5f8f702a5e276652ac1c9ea1976cafb170: Status 404 returned error can't find the container with id 438aba147fbe4efa4a5da0545ebc6a5f8f702a5e276652ac1c9ea1976cafb170
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758391 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-client-ca\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758470 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-config\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758495 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-serving-cert\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758512 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sxs7\" (UniqueName: \"kubernetes.io/projected/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-kube-api-access-5sxs7\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758529 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-proxy-ca-bundles\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758562 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-config\") on node \"crc\" DevicePath \"\""
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758571 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b67dedfb-accc-467d-a3bb-508eab4f88c8-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758581 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5csj\" (UniqueName: \"kubernetes.io/projected/b67dedfb-accc-467d-a3bb-508eab4f88c8-kube-api-access-j5csj\") on node \"crc\" DevicePath \"\""
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758591 4763 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-client-ca\") on node \"crc\" DevicePath \"\""
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758598 4763 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.859756 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-config\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.859824 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-serving-cert\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.859853 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sxs7\" (UniqueName: \"kubernetes.io/projected/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-kube-api-access-5sxs7\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.859881 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-proxy-ca-bundles\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.859931 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-client-ca\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.860922 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-client-ca\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.861339 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-config\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.861396 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-proxy-ca-bundles\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.865636 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-serving-cert\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.878781 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sxs7\" (UniqueName: \"kubernetes.io/projected/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-kube-api-access-5sxs7\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.883739 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.891193 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bpxtg"]
Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.893921 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bpxtg"]
Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.305614 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"]
Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.602235 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" event={"ID":"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27","Type":"ContainerStarted","Data":"d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451"}
Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.602284 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" event={"ID":"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27","Type":"ContainerStarted","Data":"73fa7d5ead40a096c0ba2f6504c9a4404caea7c60cd62d26d61630f96fde5c3e"}
Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.602586 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k"
Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.605155 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" event={"ID":"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8","Type":"ContainerStarted","Data":"e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9"}
Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.605186 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" event={"ID":"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8","Type":"ContainerStarted","Data":"e8430d0dd8e723932922589437322bb5e5478cc04d3605ffd87f3ce8b909e891"}
Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.605580 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.608854 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k"
Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.609366 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-26pm5" event={"ID":"84302428-88e1-47ba-84cc-7d12472f9aa2","Type":"ContainerStarted","Data":"cb6d2ccafa4331eb8e48c12622c9ab41a74f8718e5b5a0f5ab2e705ff4060b00"}
Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.609396 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-26pm5" event={"ID":"84302428-88e1-47ba-84cc-7d12472f9aa2","Type":"ContainerStarted","Data":"c91c6ce9c4ef12580ce3cd84831fd39147072d13711ed46a5cb8d64a6a605274"}
Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.612099 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.614069 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"985646ce-82c3-4387-8f8d-bf1ac731426c","Type":"ContainerStarted","Data":"4e5eb8555feeeb7ac64a72a4b6f44a1ed5577133c84b27e11cbefaf9bba1a20e"}
Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.614091 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"985646ce-82c3-4387-8f8d-bf1ac731426c","Type":"ContainerStarted","Data":"438aba147fbe4efa4a5da0545ebc6a5f8f702a5e276652ac1c9ea1976cafb170"}
Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.624514 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" podStartSLOduration=5.624491403 podStartE2EDuration="5.624491403s" podCreationTimestamp="2026-01-31 14:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:50.617600597 +0000 UTC m=+190.372338880" watchObservedRunningTime="2026-01-31 14:57:50.624491403 +0000 UTC m=+190.379229706"
Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.650068 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" podStartSLOduration=5.65005091 podStartE2EDuration="5.65005091s" podCreationTimestamp="2026-01-31 14:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:50.646759288 +0000 UTC m=+190.401497601" watchObservedRunningTime="2026-01-31 14:57:50.65005091 +0000 UTC m=+190.404789203"
Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.679201 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-26pm5" podStartSLOduration=170.67918005 podStartE2EDuration="2m50.67918005s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:50.678686494 +0000 UTC m=+190.433424777" watchObservedRunningTime="2026-01-31 14:57:50.67918005 +0000 UTC m=+190.433918343"
Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.679688 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.679680425 podStartE2EDuration="2.679680425s" podCreationTimestamp="2026-01-31 14:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:50.665129501 +0000 UTC m=+190.419867804" watchObservedRunningTime="2026-01-31 14:57:50.679680425 +0000 UTC m=+190.434418718"
Jan 31 14:57:51 crc kubenswrapper[4763]: I0131 14:57:51.050549 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b67dedfb-accc-467d-a3bb-508eab4f88c8" path="/var/lib/kubelet/pods/b67dedfb-accc-467d-a3bb-508eab4f88c8/volumes"
Jan 31 14:57:51 crc kubenswrapper[4763]: I0131 14:57:51.622502 4763 generic.go:334] "Generic (PLEG): container finished" podID="985646ce-82c3-4387-8f8d-bf1ac731426c" containerID="4e5eb8555feeeb7ac64a72a4b6f44a1ed5577133c84b27e11cbefaf9bba1a20e" exitCode=0
Jan 31 14:57:51 crc kubenswrapper[4763]: I0131 14:57:51.622560 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"985646ce-82c3-4387-8f8d-bf1ac731426c","Type":"ContainerDied","Data":"4e5eb8555feeeb7ac64a72a4b6f44a1ed5577133c84b27e11cbefaf9bba1a20e"}
Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.112229 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.113852 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.133272 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.232708 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.232778 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-var-lock\") pod \"installer-9-crc\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.232798 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e242425d-c262-45a8-b933-a84abec6740e-kube-api-access\") pod \"installer-9-crc\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.334198 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-var-lock\") pod \"installer-9-crc\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.334264 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e242425d-c262-45a8-b933-a84abec6740e-kube-api-access\") pod \"installer-9-crc\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.334339 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.334450 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.334494 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-var-lock\") pod \"installer-9-crc\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.357388 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e242425d-c262-45a8-b933-a84abec6740e-kube-api-access\") pod \"installer-9-crc\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.438576 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.827177 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.948241 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/985646ce-82c3-4387-8f8d-bf1ac731426c-kube-api-access\") pod \"985646ce-82c3-4387-8f8d-bf1ac731426c\" (UID: \"985646ce-82c3-4387-8f8d-bf1ac731426c\") "
Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.948370 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/985646ce-82c3-4387-8f8d-bf1ac731426c-kubelet-dir\") pod \"985646ce-82c3-4387-8f8d-bf1ac731426c\" (UID: \"985646ce-82c3-4387-8f8d-bf1ac731426c\") "
Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.948572 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/985646ce-82c3-4387-8f8d-bf1ac731426c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "985646ce-82c3-4387-8f8d-bf1ac731426c" (UID: "985646ce-82c3-4387-8f8d-bf1ac731426c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.953239 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/985646ce-82c3-4387-8f8d-bf1ac731426c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "985646ce-82c3-4387-8f8d-bf1ac731426c" (UID: "985646ce-82c3-4387-8f8d-bf1ac731426c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:57:56 crc kubenswrapper[4763]: I0131 14:57:56.001193 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 31 14:57:56 crc kubenswrapper[4763]: I0131 14:57:56.049370 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/985646ce-82c3-4387-8f8d-bf1ac731426c-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 31 14:57:56 crc kubenswrapper[4763]: I0131 14:57:56.049415 4763 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/985646ce-82c3-4387-8f8d-bf1ac731426c-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 31 14:57:56 crc kubenswrapper[4763]: W0131 14:57:56.408936 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode242425d_c262_45a8_b933_a84abec6740e.slice/crio-dcc37e590bdd58dcc1ac284fc11c47be208afeacbdd9d483c5b8907e05e7ba95 WatchSource:0}: Error finding container dcc37e590bdd58dcc1ac284fc11c47be208afeacbdd9d483c5b8907e05e7ba95: Status 404 returned error can't find the container with id dcc37e590bdd58dcc1ac284fc11c47be208afeacbdd9d483c5b8907e05e7ba95
Jan 31 14:57:56 crc kubenswrapper[4763]: I0131 14:57:56.659839 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"985646ce-82c3-4387-8f8d-bf1ac731426c","Type":"ContainerDied","Data":"438aba147fbe4efa4a5da0545ebc6a5f8f702a5e276652ac1c9ea1976cafb170"}
Jan 31 14:57:56 crc kubenswrapper[4763]: I0131 14:57:56.660167 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="438aba147fbe4efa4a5da0545ebc6a5f8f702a5e276652ac1c9ea1976cafb170"
Jan 31 14:57:56 crc kubenswrapper[4763]: I0131 14:57:56.659951 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 31 14:57:56 crc kubenswrapper[4763]: I0131 14:57:56.662637 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e242425d-c262-45a8-b933-a84abec6740e","Type":"ContainerStarted","Data":"dcc37e590bdd58dcc1ac284fc11c47be208afeacbdd9d483c5b8907e05e7ba95"}
Jan 31 14:57:57 crc kubenswrapper[4763]: I0131 14:57:57.670571 4763 generic.go:334] "Generic (PLEG): container finished" podID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerID="92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081" exitCode=0
Jan 31 14:57:57 crc kubenswrapper[4763]: I0131 14:57:57.670640 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bfmh" event={"ID":"463b0d45-1b3b-46a1-afbd-650fa065b38f","Type":"ContainerDied","Data":"92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081"}
Jan 31 14:57:57 crc kubenswrapper[4763]: I0131 14:57:57.672925 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e242425d-c262-45a8-b933-a84abec6740e","Type":"ContainerStarted","Data":"441b596e02022ae8016874e04bb2e0f024181a6d82a5065a1b6c5f7422bd492e"}
Jan 31 14:57:57 crc kubenswrapper[4763]: I0131 14:57:57.674777 4763 generic.go:334] "Generic (PLEG): container finished" podID="5f3cc890-2041-4983-8501-088c40c22b77" containerID="92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807" exitCode=0
Jan 31 14:57:57 crc kubenswrapper[4763]: I0131 14:57:57.674812 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxzg8" event={"ID":"5f3cc890-2041-4983-8501-088c40c22b77","Type":"ContainerDied","Data":"92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807"}
Jan 31 14:57:57 crc kubenswrapper[4763]: I0131 14:57:57.728781 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.728765936 podStartE2EDuration="2.728765936s" podCreationTimestamp="2026-01-31 14:57:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:57.726461731 +0000 UTC m=+197.481200024" watchObservedRunningTime="2026-01-31 14:57:57.728765936 +0000 UTC m=+197.483504229"
Jan 31 14:57:58 crc kubenswrapper[4763]: I0131 14:57:58.682668 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bfmh" event={"ID":"463b0d45-1b3b-46a1-afbd-650fa065b38f","Type":"ContainerStarted","Data":"fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca"}
Jan 31 14:57:58 crc kubenswrapper[4763]: I0131 14:57:58.685485 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxzg8" event={"ID":"5f3cc890-2041-4983-8501-088c40c22b77","Type":"ContainerStarted","Data":"6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1"}
Jan 31 14:57:58 crc kubenswrapper[4763]: I0131 14:57:58.727431 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4bfmh" podStartSLOduration=8.83060615 podStartE2EDuration="44.727405069s" podCreationTimestamp="2026-01-31 14:57:14 +0000 UTC" firstStartedPulling="2026-01-31 14:57:22.244111688 +0000 UTC m=+161.998849971" lastFinishedPulling="2026-01-31 14:57:58.140910597 +0000 UTC m=+197.895648890" observedRunningTime="2026-01-31 14:57:58.705428271 +0000 UTC m=+198.460166584" watchObservedRunningTime="2026-01-31 14:57:58.727405069 +0000 UTC m=+198.482143382"
Jan 31 14:57:58 crc kubenswrapper[4763]: I0131 14:57:58.729385 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wxzg8" podStartSLOduration=8.906854941 podStartE2EDuration="44.729372974s" podCreationTimestamp="2026-01-31 14:57:14 +0000 UTC" firstStartedPulling="2026-01-31 14:57:22.243419957 +0000 UTC m=+161.998158240" lastFinishedPulling="2026-01-31 14:57:58.06593797 +0000 UTC m=+197.820676273" observedRunningTime="2026-01-31 14:57:58.724647001 +0000 UTC m=+198.479385324" watchObservedRunningTime="2026-01-31 14:57:58.729372974 +0000 UTC m=+198.484111317"
Jan 31 14:58:00 crc kubenswrapper[4763]: I0131 14:58:00.877929 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pcvn"]
Jan 31 14:58:02 crc kubenswrapper[4763]: I0131 14:58:02.718634 4763 generic.go:334] "Generic (PLEG): container finished" podID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerID="8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b" exitCode=0
Jan 31 14:58:02 crc kubenswrapper[4763]: I0131 14:58:02.721863 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9df4p" event={"ID":"5c097873-7ca4-491d-86c4-31b2ab99d63d","Type":"ContainerDied","Data":"8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b"}
Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.498044 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wxzg8"
Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.498535 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wxzg8"
Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.735026 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6ddv" event={"ID":"b8a35a73-67a0-4bb4-9954-46350d31b017","Type":"ContainerStarted","Data":"b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239"}
Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.736732 4763 generic.go:334] "Generic (PLEG): container finished" podID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerID="685557cff2acc69b1516ebfc5ddcd84394226cdd6a924fb7fd37630cb88d114b" exitCode=0
Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.736799 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr7l4" event={"ID":"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa","Type":"ContainerDied","Data":"685557cff2acc69b1516ebfc5ddcd84394226cdd6a924fb7fd37630cb88d114b"}
Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.738585 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9df4p" event={"ID":"5c097873-7ca4-491d-86c4-31b2ab99d63d","Type":"ContainerStarted","Data":"34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5"}
Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.740489 4763 generic.go:334] "Generic (PLEG): container finished" podID="1e24d612-62ed-4bd5-8e07-889710d16851" containerID="6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144" exitCode=0
Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.740527 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-22bkm" event={"ID":"1e24d612-62ed-4bd5-8e07-889710d16851","Type":"ContainerDied","Data":"6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144"}
Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.808685 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9df4p" podStartSLOduration=1.9307628879999998 podStartE2EDuration="53.808671472s" podCreationTimestamp="2026-01-31 14:57:11 +0000 UTC" firstStartedPulling="2026-01-31 14:57:12.266875242 +0000 UTC m=+152.021613535" lastFinishedPulling="2026-01-31 14:58:04.144783816 +0000 UTC m=+203.899522119" observedRunningTime="2026-01-31 14:58:04.78939888 +0000 UTC m=+204.544137173" watchObservedRunningTime="2026-01-31 14:58:04.808671472 +0000 UTC m=+204.563409765"
Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.905502 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4bfmh"
Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.905570 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4bfmh"
Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.747491 4763 generic.go:334] "Generic (PLEG): container finished" podID="2434f0b9-846a-444c-b487-745d4010002b" containerID="f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1" exitCode=0
Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.747566 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnmmq" event={"ID":"2434f0b9-846a-444c-b487-745d4010002b","Type":"ContainerDied","Data":"f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1"}
Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.751973 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-22bkm" event={"ID":"1e24d612-62ed-4bd5-8e07-889710d16851","Type":"ContainerStarted","Data":"3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a"}
Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.754978 4763 generic.go:334] "Generic (PLEG): container finished" podID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerID="b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239" exitCode=0
Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.755026 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6ddv" event={"ID":"b8a35a73-67a0-4bb4-9954-46350d31b017","Type":"ContainerDied","Data":"b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239"}
Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.777164 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"]
Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.777814 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" podUID="d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" containerName="controller-manager" containerID="cri-o://e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9" gracePeriod=30
Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.795228 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wxzg8" podUID="5f3cc890-2041-4983-8501-088c40c22b77" containerName="registry-server" probeResult="failure" output=<
Jan 31 14:58:05 crc kubenswrapper[4763]: timeout: failed to connect service ":50051" within 1s
Jan 31 14:58:05 crc kubenswrapper[4763]: >
Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.823787 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k"]
Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.823993 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" podUID="fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" containerName="route-controller-manager" containerID="cri-o://d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451" gracePeriod=30
Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.845254 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-22bkm" podStartSLOduration=2.626649923 podStartE2EDuration="52.845237561s" podCreationTimestamp="2026-01-31 14:57:13 +0000 UTC" firstStartedPulling="2026-01-31 14:57:15.31944481 +0000 UTC m=+155.074183093" lastFinishedPulling="2026-01-31 14:58:05.538032438 +0000 UTC m=+205.292770731" observedRunningTime="2026-01-31 14:58:05.843708648 +0000 UTC m=+205.598446941" watchObservedRunningTime="2026-01-31 14:58:05.845237561 +0000 UTC m=+205.599975854"
Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.960925 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4bfmh" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerName="registry-server" probeResult="failure" output=<
Jan 31 14:58:05 crc kubenswrapper[4763]: timeout: failed to connect service ":50051" within 1s
Jan 31 14:58:05 crc kubenswrapper[4763]: >
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.521225 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k"
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.610624 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.677879 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-config\") pod \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") "
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.677950 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-serving-cert\") pod \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") "
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.677986 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-client-ca\") pod \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") "
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.678090 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrm55\" (UniqueName: \"kubernetes.io/projected/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-kube-api-access-nrm55\") pod \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") "
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.678834 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-client-ca" (OuterVolumeSpecName: "client-ca") pod "fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" (UID: "fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.678847 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-config" (OuterVolumeSpecName: "config") pod "fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" (UID: "fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.685008 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-kube-api-access-nrm55" (OuterVolumeSpecName: "kube-api-access-nrm55") pod "fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" (UID: "fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27"). InnerVolumeSpecName "kube-api-access-nrm55". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.697847 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" (UID: "fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.761328 4763 generic.go:334] "Generic (PLEG): container finished" podID="fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" containerID="d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451" exitCode=0
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.761393 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" event={"ID":"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27","Type":"ContainerDied","Data":"d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451"}
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.761514 4763 scope.go:117] "RemoveContainer" containerID="d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451"
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.761657 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" event={"ID":"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27","Type":"ContainerDied","Data":"73fa7d5ead40a096c0ba2f6504c9a4404caea7c60cd62d26d61630f96fde5c3e"}
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.762683 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k"
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.763359 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr7l4" event={"ID":"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa","Type":"ContainerStarted","Data":"ff231089d225f30c4966262b3b6adccb590c1423c4741d70fcc0d75c37452971"}
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.765343 4763 generic.go:334] "Generic (PLEG): container finished" podID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerID="52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9" exitCode=0
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.765392 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m6qg" event={"ID":"5a85c02e-9d6e-4d11-be81-242bf4fee8c4","Type":"ContainerDied","Data":"52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9"}
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.767613 4763 generic.go:334] "Generic (PLEG): container finished" podID="d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" containerID="e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9" exitCode=0
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.767634 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" event={"ID":"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8","Type":"ContainerDied","Data":"e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9"}
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.767653 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" event={"ID":"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8","Type":"ContainerDied","Data":"e8430d0dd8e723932922589437322bb5e5478cc04d3605ffd87f3ce8b909e891"}
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.767716 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.778934 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-proxy-ca-bundles\") pod \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") "
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.778984 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-client-ca\") pod \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") "
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.779017 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sxs7\" (UniqueName: \"kubernetes.io/projected/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-kube-api-access-5sxs7\") pod \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") "
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.779046 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-serving-cert\") pod \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") "
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.779125 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-config\") pod \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") "
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.779351 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.779366 4763 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-client-ca\") on node \"crc\" DevicePath \"\""
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.779375 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrm55\" (UniqueName: \"kubernetes.io/projected/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-kube-api-access-nrm55\") on node \"crc\" DevicePath \"\""
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.779384 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-config\") on node \"crc\" DevicePath \"\""
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.779758 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-client-ca" (OuterVolumeSpecName: "client-ca") pod "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" (UID: "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.779867 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-config" (OuterVolumeSpecName: "config") pod "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" (UID: "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.780549 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" (UID: "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.783067 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-kube-api-access-5sxs7" (OuterVolumeSpecName: "kube-api-access-5sxs7") pod "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" (UID: "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8"). InnerVolumeSpecName "kube-api-access-5sxs7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.792963 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" (UID: "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.803251 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mr7l4" podStartSLOduration=3.431062845 podStartE2EDuration="55.803228542s" podCreationTimestamp="2026-01-31 14:57:11 +0000 UTC" firstStartedPulling="2026-01-31 14:57:13.281225983 +0000 UTC m=+153.035964276" lastFinishedPulling="2026-01-31 14:58:05.65339168 +0000 UTC m=+205.408129973" observedRunningTime="2026-01-31 14:58:06.801187944 +0000 UTC m=+206.555926237" watchObservedRunningTime="2026-01-31 14:58:06.803228542 +0000 UTC m=+206.557966845"
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.835081 4763 scope.go:117] "RemoveContainer" containerID="d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451"
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.840626 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k"]
Jan 31 14:58:06 crc kubenswrapper[4763]: E0131 14:58:06.841970 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451\": container with ID starting with d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451 not found: ID does not exist" containerID="d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451"
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.842006 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451"} err="failed to get container status \"d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451\": rpc error: code = NotFound desc = could not find container \"d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451\": container with ID starting with d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451 not found: ID does not exist"
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.842069 4763 scope.go:117] "RemoveContainer" containerID="e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9"
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.845572 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k"]
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.865930 4763 scope.go:117] "RemoveContainer" containerID="e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9"
Jan 31 14:58:06 crc kubenswrapper[4763]: E0131 14:58:06.866399 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9\": container with ID starting with e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9 not found: ID does not exist" containerID="e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9"
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.866446 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9"} err="failed to get container status \"e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9\": rpc error: code = NotFound desc = could not find container \"e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9\": container with ID starting with e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9 not found: ID does not exist"
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.880215 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-config\") on node \"crc\" DevicePath \"\""
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.880245 4763 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.880256 4763 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-client-ca\") on node \"crc\" DevicePath \"\""
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.880264 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sxs7\" (UniqueName: \"kubernetes.io/projected/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-kube-api-access-5sxs7\") on node \"crc\" DevicePath \"\""
Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.880273 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.048651 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" path="/var/lib/kubelet/pods/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27/volumes"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.083053 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"]
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.085653 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"]
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.232481 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76646487c9-96gk6"]
Jan 31 14:58:07 crc kubenswrapper[4763]: E0131 14:58:07.232789 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" containerName="controller-manager"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.232809 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" containerName="controller-manager"
Jan 31 14:58:07 crc kubenswrapper[4763]: E0131 14:58:07.232832 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" containerName="route-controller-manager"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.232841 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" containerName="route-controller-manager"
Jan 31 14:58:07 crc kubenswrapper[4763]: E0131 14:58:07.232855 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985646ce-82c3-4387-8f8d-bf1ac731426c" containerName="pruner"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.232863 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="985646ce-82c3-4387-8f8d-bf1ac731426c" containerName="pruner"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.232995 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" containerName="route-controller-manager"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.233008 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" containerName="controller-manager"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.233019 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="985646ce-82c3-4387-8f8d-bf1ac731426c" containerName="pruner"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.233484 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.235510 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd"]
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.236298 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.236488 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.236827 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.237088 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.237178 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.237339 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.239307 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.240292 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.240397 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.240764 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.240964 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.241109 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.244809 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.245784 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.247765 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76646487c9-96gk6"] Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.319672 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd"] Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.385184 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-proxy-ca-bundles\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.385240 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-client-ca\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.385271 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-config\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.385292 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp7h7\" (UniqueName: \"kubernetes.io/projected/f1904fac-b0bf-46bf-b137-8cae4630dc39-kube-api-access-sp7h7\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.385317 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-config\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.385353 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d77v7\" (UniqueName: \"kubernetes.io/projected/3107af8a-7bbd-4214-84e6-a3a18e79510f-kube-api-access-d77v7\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.385375 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3107af8a-7bbd-4214-84e6-a3a18e79510f-serving-cert\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.385420 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-client-ca\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.385446 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1904fac-b0bf-46bf-b137-8cae4630dc39-serving-cert\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.486494 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-proxy-ca-bundles\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.490797 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-client-ca\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.490926 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-config\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.490958 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp7h7\" (UniqueName: \"kubernetes.io/projected/f1904fac-b0bf-46bf-b137-8cae4630dc39-kube-api-access-sp7h7\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.491015 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-config\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.491086 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d77v7\" (UniqueName: \"kubernetes.io/projected/3107af8a-7bbd-4214-84e6-a3a18e79510f-kube-api-access-d77v7\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.491110 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3107af8a-7bbd-4214-84e6-a3a18e79510f-serving-cert\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.491192 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-client-ca\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.491226 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1904fac-b0bf-46bf-b137-8cae4630dc39-serving-cert\") pod 
\"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.492010 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-proxy-ca-bundles\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.492038 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-client-ca\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.492819 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-client-ca\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.493092 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-config\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.496598 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-config\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.498511 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1904fac-b0bf-46bf-b137-8cae4630dc39-serving-cert\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.498521 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3107af8a-7bbd-4214-84e6-a3a18e79510f-serving-cert\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.516200 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp7h7\" (UniqueName: \"kubernetes.io/projected/f1904fac-b0bf-46bf-b137-8cae4630dc39-kube-api-access-sp7h7\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.517864 4763 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d77v7\" (UniqueName: \"kubernetes.io/projected/3107af8a-7bbd-4214-84e6-a3a18e79510f-kube-api-access-d77v7\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.614819 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.629285 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.776792 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6ddv" event={"ID":"b8a35a73-67a0-4bb4-9954-46350d31b017","Type":"ContainerStarted","Data":"b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed"} Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.794163 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k6ddv" podStartSLOduration=3.699863981 podStartE2EDuration="57.794149188s" podCreationTimestamp="2026-01-31 14:57:10 +0000 UTC" firstStartedPulling="2026-01-31 14:57:12.244546875 +0000 UTC m=+151.999285168" lastFinishedPulling="2026-01-31 14:58:06.338832082 +0000 UTC m=+206.093570375" observedRunningTime="2026-01-31 14:58:07.792962555 +0000 UTC m=+207.547700848" watchObservedRunningTime="2026-01-31 14:58:07.794149188 +0000 UTC m=+207.548887481" Jan 31 14:58:09 crc kubenswrapper[4763]: I0131 14:58:09.057285 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" path="/var/lib/kubelet/pods/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8/volumes" Jan 31 14:58:09 crc kubenswrapper[4763]: I0131 14:58:09.653018 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd"] Jan 31 14:58:09 crc kubenswrapper[4763]: I0131 14:58:09.656376 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76646487c9-96gk6"] Jan 31 14:58:09 crc kubenswrapper[4763]: W0131 14:58:09.661904 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1904fac_b0bf_46bf_b137_8cae4630dc39.slice/crio-6d86f7ce32eb06c34ecceb3c5a5ea319381fd9969dad831555e155e547b2cd01 WatchSource:0}: Error finding container 6d86f7ce32eb06c34ecceb3c5a5ea319381fd9969dad831555e155e547b2cd01: Status 404 returned error can't find the container with id 6d86f7ce32eb06c34ecceb3c5a5ea319381fd9969dad831555e155e547b2cd01 Jan 31 14:58:09 crc kubenswrapper[4763]: W0131 14:58:09.663676 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3107af8a_7bbd_4214_84e6_a3a18e79510f.slice/crio-250afbceb4491af0b71e1680cfee8a18ebd2db8880f0ea2683b0332562116b45 WatchSource:0}: Error finding container 250afbceb4491af0b71e1680cfee8a18ebd2db8880f0ea2683b0332562116b45: Status 404 returned error can't find the container with id 250afbceb4491af0b71e1680cfee8a18ebd2db8880f0ea2683b0332562116b45 Jan 31 14:58:09 crc kubenswrapper[4763]: I0131 14:58:09.789958 4763 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnmmq" event={"ID":"2434f0b9-846a-444c-b487-745d4010002b","Type":"ContainerStarted","Data":"0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97"} Jan 31 14:58:09 crc kubenswrapper[4763]: I0131 14:58:09.790952 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" event={"ID":"3107af8a-7bbd-4214-84e6-a3a18e79510f","Type":"ContainerStarted","Data":"250afbceb4491af0b71e1680cfee8a18ebd2db8880f0ea2683b0332562116b45"} Jan 31 14:58:09 crc kubenswrapper[4763]: I0131 14:58:09.793260 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" event={"ID":"f1904fac-b0bf-46bf-b137-8cae4630dc39","Type":"ContainerStarted","Data":"6d86f7ce32eb06c34ecceb3c5a5ea319381fd9969dad831555e155e547b2cd01"} Jan 31 14:58:09 crc kubenswrapper[4763]: I0131 14:58:09.806774 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wnmmq" podStartSLOduration=1.993399132 podStartE2EDuration="56.806756286s" podCreationTimestamp="2026-01-31 14:57:13 +0000 UTC" firstStartedPulling="2026-01-31 14:57:14.292211937 +0000 UTC m=+154.046950240" lastFinishedPulling="2026-01-31 14:58:09.105569091 +0000 UTC m=+208.860307394" observedRunningTime="2026-01-31 14:58:09.803282288 +0000 UTC m=+209.558020581" watchObservedRunningTime="2026-01-31 14:58:09.806756286 +0000 UTC m=+209.561494579" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.324031 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.324429 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.420545 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.684296 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.684347 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.741106 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.847607 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.849047 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.909712 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.910066 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.945915 4763 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:58:12 crc kubenswrapper[4763]: I0131 14:58:12.822580 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" event={"ID":"f1904fac-b0bf-46bf-b137-8cae4630dc39","Type":"ContainerStarted","Data":"7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e"} Jan 31 14:58:12 crc kubenswrapper[4763]: I0131 14:58:12.823802 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:12 crc kubenswrapper[4763]: I0131 14:58:12.825929 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m6qg" event={"ID":"5a85c02e-9d6e-4d11-be81-242bf4fee8c4","Type":"ContainerStarted","Data":"743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a"} Jan 31 14:58:12 crc kubenswrapper[4763]: I0131 14:58:12.827175 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" event={"ID":"3107af8a-7bbd-4214-84e6-a3a18e79510f","Type":"ContainerStarted","Data":"7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb"} Jan 31 14:58:12 crc kubenswrapper[4763]: I0131 14:58:12.837943 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:12 crc kubenswrapper[4763]: I0131 14:58:12.846545 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" podStartSLOduration=7.846527188 podStartE2EDuration="7.846527188s" podCreationTimestamp="2026-01-31 14:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:58:12.842970578 +0000 UTC m=+212.597708861" watchObservedRunningTime="2026-01-31 14:58:12.846527188 +0000 UTC m=+212.601265481" Jan 31 14:58:12 crc kubenswrapper[4763]: I0131 14:58:12.908395 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.516967 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.517043 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.565000 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.832872 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.840458 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.853981 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" podStartSLOduration=8.853964658 podStartE2EDuration="8.853964658s" podCreationTimestamp="2026-01-31 14:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:58:13.852128437 +0000 UTC m=+213.606866720" watchObservedRunningTime="2026-01-31 14:58:13.853964658 +0000 UTC m=+213.608702951" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.890723 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4m6qg" podStartSLOduration=4.091857244 podStartE2EDuration="1m2.890708351s" podCreationTimestamp="2026-01-31 14:57:11 +0000 UTC" firstStartedPulling="2026-01-31 14:57:12.249608883 +0000 UTC m=+152.004347176" lastFinishedPulling="2026-01-31 14:58:11.04845999 +0000 UTC m=+210.803198283" observedRunningTime="2026-01-31 14:58:13.887288735 +0000 UTC m=+213.642027068" watchObservedRunningTime="2026-01-31 14:58:13.890708351 +0000 UTC m=+213.645446634" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.904216 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.904283 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.950780 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.176958 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.177018 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.177063 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.177562 4763 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.177616 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb" gracePeriod=600 Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.305833 4763 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/community-operators-mr7l4"] Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.559049 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wxzg8" Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.622856 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wxzg8" Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.854036 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb" exitCode=0 Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.854561 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb"} Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.930649 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.977764 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:58:15 crc kubenswrapper[4763]: I0131 14:58:15.020150 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:58:15 crc kubenswrapper[4763]: I0131 14:58:15.860949 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mr7l4" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerName="registry-server" containerID="cri-o://ff231089d225f30c4966262b3b6adccb590c1423c4741d70fcc0d75c37452971" gracePeriod=2 Jan 31 14:58:16 crc kubenswrapper[4763]: I0131 14:58:16.716899 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-22bkm"] Jan 31 14:58:16 crc kubenswrapper[4763]: I0131 14:58:16.867192 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-22bkm" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" containerName="registry-server" containerID="cri-o://3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a" gracePeriod=2 Jan 31 14:58:17 crc kubenswrapper[4763]: I0131 14:58:17.873178 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"b55482ed2ed4c41f6d8a4ff378f658a5880b1596f1e3f6ef24577f1114937a3b"} Jan 31 14:58:18 crc kubenswrapper[4763]: I0131 14:58:18.884031 4763 generic.go:334] "Generic (PLEG): container finished" podID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerID="ff231089d225f30c4966262b3b6adccb590c1423c4741d70fcc0d75c37452971" exitCode=0 Jan 31 14:58:18 crc kubenswrapper[4763]: I0131 14:58:18.884095 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr7l4" event={"ID":"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa","Type":"ContainerDied","Data":"ff231089d225f30c4966262b3b6adccb590c1423c4741d70fcc0d75c37452971"} Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.107210 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-4bfmh"] Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.107632 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4bfmh" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerName="registry-server" containerID="cri-o://fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca" gracePeriod=2 Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.447249 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.561088 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4rwh\" (UniqueName: \"kubernetes.io/projected/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-kube-api-access-m4rwh\") pod \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.562355 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-catalog-content\") pod \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.562387 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-utilities\") pod \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.563565 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-utilities" (OuterVolumeSpecName: "utilities") pod "f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" (UID: "f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.567684 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-kube-api-access-m4rwh" (OuterVolumeSpecName: "kube-api-access-m4rwh") pod "f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" (UID: "f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa"). InnerVolumeSpecName "kube-api-access-m4rwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.612542 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.626346 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" (UID: "f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.654624 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.663668 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4rwh\" (UniqueName: \"kubernetes.io/projected/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-kube-api-access-m4rwh\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.663715 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.663726 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.765032 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-utilities\") pod \"1e24d612-62ed-4bd5-8e07-889710d16851\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.765116 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dshgn\" (UniqueName: \"kubernetes.io/projected/1e24d612-62ed-4bd5-8e07-889710d16851-kube-api-access-dshgn\") pod \"1e24d612-62ed-4bd5-8e07-889710d16851\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.765203 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-catalog-content\") pod \"463b0d45-1b3b-46a1-afbd-650fa065b38f\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.765229 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9fps\" (UniqueName: \"kubernetes.io/projected/463b0d45-1b3b-46a1-afbd-650fa065b38f-kube-api-access-g9fps\") pod \"463b0d45-1b3b-46a1-afbd-650fa065b38f\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.765262 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-utilities\") pod \"463b0d45-1b3b-46a1-afbd-650fa065b38f\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.765290 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-catalog-content\") pod \"1e24d612-62ed-4bd5-8e07-889710d16851\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.766570 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-utilities" (OuterVolumeSpecName: "utilities") pod "1e24d612-62ed-4bd5-8e07-889710d16851" (UID: "1e24d612-62ed-4bd5-8e07-889710d16851"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.770238 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-utilities" (OuterVolumeSpecName: "utilities") pod "463b0d45-1b3b-46a1-afbd-650fa065b38f" (UID: "463b0d45-1b3b-46a1-afbd-650fa065b38f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.770253 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/463b0d45-1b3b-46a1-afbd-650fa065b38f-kube-api-access-g9fps" (OuterVolumeSpecName: "kube-api-access-g9fps") pod "463b0d45-1b3b-46a1-afbd-650fa065b38f" (UID: "463b0d45-1b3b-46a1-afbd-650fa065b38f"). InnerVolumeSpecName "kube-api-access-g9fps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.770390 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e24d612-62ed-4bd5-8e07-889710d16851-kube-api-access-dshgn" (OuterVolumeSpecName: "kube-api-access-dshgn") pod "1e24d612-62ed-4bd5-8e07-889710d16851" (UID: "1e24d612-62ed-4bd5-8e07-889710d16851"). InnerVolumeSpecName "kube-api-access-dshgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.782833 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.782945 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dshgn\" (UniqueName: \"kubernetes.io/projected/1e24d612-62ed-4bd5-8e07-889710d16851-kube-api-access-dshgn\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.783006 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9fps\" (UniqueName: \"kubernetes.io/projected/463b0d45-1b3b-46a1-afbd-650fa065b38f-kube-api-access-g9fps\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.783060 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.787427 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e24d612-62ed-4bd5-8e07-889710d16851" (UID: "1e24d612-62ed-4bd5-8e07-889710d16851"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.884418 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.896377 4763 generic.go:334] "Generic (PLEG): container finished" podID="1e24d612-62ed-4bd5-8e07-889710d16851" containerID="3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a" exitCode=0 Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.896473 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.896496 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-22bkm" event={"ID":"1e24d612-62ed-4bd5-8e07-889710d16851","Type":"ContainerDied","Data":"3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a"} Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.896570 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-22bkm" event={"ID":"1e24d612-62ed-4bd5-8e07-889710d16851","Type":"ContainerDied","Data":"c82ecb291538ae24976b93059fe5012e9eba52e1be048225cfff7c40417c81ce"} Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.896602 4763 scope.go:117] "RemoveContainer" containerID="3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.900996 4763 generic.go:334] "Generic (PLEG): container finished" podID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerID="fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca" exitCode=0 Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.901093 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.901108 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bfmh" event={"ID":"463b0d45-1b3b-46a1-afbd-650fa065b38f","Type":"ContainerDied","Data":"fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca"} Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.901756 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bfmh" event={"ID":"463b0d45-1b3b-46a1-afbd-650fa065b38f","Type":"ContainerDied","Data":"64d775f9e024e4389d9fed6e15e2915fc320014ae014969ebec947a0370a2ca9"} Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.907024 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr7l4" event={"ID":"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa","Type":"ContainerDied","Data":"29747698a85a5ba3ecc912242e839ad570821011bfaa89976cf716705b0e4372"} Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.907140 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.921443 4763 scope.go:117] "RemoveContainer" containerID="6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.952381 4763 scope.go:117] "RemoveContainer" containerID="b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.955759 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-22bkm"] Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.975772 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-22bkm"] Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.978055 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mr7l4"] Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.981761 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mr7l4"] Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.982167 4763 scope.go:117] "RemoveContainer" containerID="3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a" Jan 31 14:58:19 crc kubenswrapper[4763]: E0131 14:58:19.982823 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a\": container with ID starting with 3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a not found: ID does not exist" containerID="3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.982864 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a"} err="failed to get container status \"3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a\": rpc error: code = NotFound desc = could not find container \"3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a\": container with ID starting with 3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a not found: ID does not exist" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.982896 4763 scope.go:117] "RemoveContainer" containerID="6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144" Jan 31 14:58:19 crc kubenswrapper[4763]: E0131 14:58:19.983414 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144\": container with ID starting with 6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144 not found: ID does not exist" containerID="6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.983480 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144"} err="failed to get container status \"6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144\": rpc error: code = NotFound desc = could not find container \"6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144\": container with ID starting with 
6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144 not found: ID does not exist" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.983525 4763 scope.go:117] "RemoveContainer" containerID="b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f" Jan 31 14:58:19 crc kubenswrapper[4763]: E0131 14:58:19.984116 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f\": container with ID starting with b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f not found: ID does not exist" containerID="b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.984166 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f"} err="failed to get container status \"b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f\": rpc error: code = NotFound desc = could not find container \"b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f\": container with ID starting with b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f not found: ID does not exist" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.984207 4763 scope.go:117] "RemoveContainer" containerID="fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.999395 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "463b0d45-1b3b-46a1-afbd-650fa065b38f" (UID: "463b0d45-1b3b-46a1-afbd-650fa065b38f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.009949 4763 scope.go:117] "RemoveContainer" containerID="92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081" Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.033432 4763 scope.go:117] "RemoveContainer" containerID="af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e" Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.061039 4763 scope.go:117] "RemoveContainer" containerID="fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca" Jan 31 14:58:20 crc kubenswrapper[4763]: E0131 14:58:20.061630 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca\": container with ID starting with fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca not found: ID does not exist" containerID="fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca" Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.061691 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca"} err="failed to get container status \"fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca\": rpc error: code = NotFound desc = could not find container \"fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca\": container with ID starting with fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca not found: ID does not exist" Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.061853 4763 scope.go:117] "RemoveContainer" containerID="92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081" Jan 31 14:58:20 crc kubenswrapper[4763]: E0131 14:58:20.062315 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081\": container with ID starting with 92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081 not found: ID does not exist" containerID="92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081" Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.062382 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081"} err="failed to get container status \"92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081\": rpc error: code = NotFound desc = could not find container \"92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081\": container with ID starting with 92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081 not found: ID does not exist" Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.062430 4763 scope.go:117] "RemoveContainer" containerID="af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e" Jan 31 14:58:20 crc kubenswrapper[4763]: E0131 14:58:20.062886 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e\": container with ID starting with af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e not found: ID does not exist" containerID="af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e" Jan 31 14:58:20 crc 
kubenswrapper[4763]: I0131 14:58:20.062916 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e"} err="failed to get container status \"af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e\": rpc error: code = NotFound desc = could not find container \"af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e\": container with ID starting with af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e not found: ID does not exist" Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.062936 4763 scope.go:117] "RemoveContainer" containerID="ff231089d225f30c4966262b3b6adccb590c1423c4741d70fcc0d75c37452971" Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.081419 4763 scope.go:117] "RemoveContainer" containerID="685557cff2acc69b1516ebfc5ddcd84394226cdd6a924fb7fd37630cb88d114b" Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.086959 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.107187 4763 scope.go:117] "RemoveContainer" containerID="cbcf059643f243b97663d9030a999deafef368163de2196a0d497c8e7eabbc09" Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.247897 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4bfmh"] Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.256314 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4bfmh"] Jan 31 14:58:21 crc kubenswrapper[4763]: I0131 14:58:21.049165 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" path="/var/lib/kubelet/pods/1e24d612-62ed-4bd5-8e07-889710d16851/volumes" Jan 31 14:58:21 crc kubenswrapper[4763]: I0131 14:58:21.049897 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" path="/var/lib/kubelet/pods/463b0d45-1b3b-46a1-afbd-650fa065b38f/volumes" Jan 31 14:58:21 crc kubenswrapper[4763]: I0131 14:58:21.050590 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" path="/var/lib/kubelet/pods/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa/volumes" Jan 31 14:58:21 crc kubenswrapper[4763]: I0131 14:58:21.714447 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4m6qg" Jan 31 14:58:21 crc kubenswrapper[4763]: I0131 14:58:21.714504 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4m6qg" Jan 31 14:58:21 crc kubenswrapper[4763]: I0131 14:58:21.783776 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4m6qg" Jan 31 14:58:21 crc kubenswrapper[4763]: I0131 14:58:21.987962 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4m6qg" Jan 31 14:58:23 crc kubenswrapper[4763]: I0131 14:58:23.592151 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 14:58:25 crc kubenswrapper[4763]: I0131 14:58:25.778430 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-76646487c9-96gk6"] Jan 31 14:58:25 crc kubenswrapper[4763]: I0131 14:58:25.778629 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" podUID="f1904fac-b0bf-46bf-b137-8cae4630dc39" containerName="controller-manager" containerID="cri-o://7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e" gracePeriod=30 Jan 31 14:58:25 crc kubenswrapper[4763]: I0131 14:58:25.875953 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd"] Jan 31 14:58:25 crc kubenswrapper[4763]: I0131 14:58:25.876221 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" podUID="3107af8a-7bbd-4214-84e6-a3a18e79510f" containerName="route-controller-manager" containerID="cri-o://7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb" gracePeriod=30 Jan 31 14:58:25 crc kubenswrapper[4763]: I0131 14:58:25.906911 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" podUID="275ea46d-7a78-4457-a5ba-7b3000170d0e" containerName="oauth-openshift" containerID="cri-o://003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de" gracePeriod=15 Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.109486 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4m6qg"] Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.109747 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4m6qg" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerName="registry-server" containerID="cri-o://743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a" gracePeriod=2 Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.350786 4763 util.go:48] "No ready sandbox for pod can be found. 
Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.350786 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd"
Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.379262 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d77v7\" (UniqueName: \"kubernetes.io/projected/3107af8a-7bbd-4214-84e6-a3a18e79510f-kube-api-access-d77v7\") pod \"3107af8a-7bbd-4214-84e6-a3a18e79510f\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") "
Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.379314 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-client-ca\") pod \"3107af8a-7bbd-4214-84e6-a3a18e79510f\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") "
Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.379358 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-config\") pod \"3107af8a-7bbd-4214-84e6-a3a18e79510f\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") "
Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.379381 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3107af8a-7bbd-4214-84e6-a3a18e79510f-serving-cert\") pod \"3107af8a-7bbd-4214-84e6-a3a18e79510f\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") "
Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.380675 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-client-ca" (OuterVolumeSpecName: "client-ca") pod "3107af8a-7bbd-4214-84e6-a3a18e79510f" (UID: "3107af8a-7bbd-4214-84e6-a3a18e79510f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.380763 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-config" (OuterVolumeSpecName: "config") pod "3107af8a-7bbd-4214-84e6-a3a18e79510f" (UID: "3107af8a-7bbd-4214-84e6-a3a18e79510f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.388067 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3107af8a-7bbd-4214-84e6-a3a18e79510f-kube-api-access-d77v7" (OuterVolumeSpecName: "kube-api-access-d77v7") pod "3107af8a-7bbd-4214-84e6-a3a18e79510f" (UID: "3107af8a-7bbd-4214-84e6-a3a18e79510f"). InnerVolumeSpecName "kube-api-access-d77v7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.393070 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3107af8a-7bbd-4214-84e6-a3a18e79510f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3107af8a-7bbd-4214-84e6-a3a18e79510f" (UID: "3107af8a-7bbd-4214-84e6-a3a18e79510f"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.480232 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d77v7\" (UniqueName: \"kubernetes.io/projected/3107af8a-7bbd-4214-84e6-a3a18e79510f-kube-api-access-d77v7\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.480262 4763 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.480274 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.480282 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3107af8a-7bbd-4214-84e6-a3a18e79510f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.481834 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.544299 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4m6qg" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.586982 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-serving-cert\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587029 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-session\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587068 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-provider-selection\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587092 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-catalog-content\") pod \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587124 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-trusted-ca-bundle\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587156 4763 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-llk9p\" (UniqueName: \"kubernetes.io/projected/275ea46d-7a78-4457-a5ba-7b3000170d0e-kube-api-access-llk9p\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587185 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-idp-0-file-data\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587201 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-cliconfig\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587223 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-router-certs\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587253 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq9nf\" (UniqueName: \"kubernetes.io/projected/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-kube-api-access-sq9nf\") pod \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587292 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-service-ca\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587321 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-login\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587340 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-ocp-branding-template\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587355 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-utilities\") pod \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587369 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-policies\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587391 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-dir\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587406 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-error\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.589203 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.589585 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.595030 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.596333 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-utilities" (OuterVolumeSpecName: "utilities") pod "5a85c02e-9d6e-4d11-be81-242bf4fee8c4" (UID: "5a85c02e-9d6e-4d11-be81-242bf4fee8c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.596421 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.597908 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.598494 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.598543 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.598858 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.599217 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.610068 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.612909 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-kube-api-access-sq9nf" (OuterVolumeSpecName: "kube-api-access-sq9nf") pod "5a85c02e-9d6e-4d11-be81-242bf4fee8c4" (UID: "5a85c02e-9d6e-4d11-be81-242bf4fee8c4"). InnerVolumeSpecName "kube-api-access-sq9nf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.616104 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.616329 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/275ea46d-7a78-4457-a5ba-7b3000170d0e-kube-api-access-llk9p" (OuterVolumeSpecName: "kube-api-access-llk9p") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "kube-api-access-llk9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.616466 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.616483 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.673839 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a85c02e-9d6e-4d11-be81-242bf4fee8c4" (UID: "5a85c02e-9d6e-4d11-be81-242bf4fee8c4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.688439 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq9nf\" (UniqueName: \"kubernetes.io/projected/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-kube-api-access-sq9nf\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.688643 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.688778 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.688863 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.688956 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689036 4763 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689122 4763 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689211 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689289 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689366 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689442 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689530 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc 
kubenswrapper[4763]: I0131 14:58:26.689610 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689719 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llk9p\" (UniqueName: \"kubernetes.io/projected/275ea46d-7a78-4457-a5ba-7b3000170d0e-kube-api-access-llk9p\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689832 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689922 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.690003 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.803666 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.892213 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1904fac-b0bf-46bf-b137-8cae4630dc39-serving-cert\") pod \"f1904fac-b0bf-46bf-b137-8cae4630dc39\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.892725 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-client-ca\") pod \"f1904fac-b0bf-46bf-b137-8cae4630dc39\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.892818 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-config\") pod \"f1904fac-b0bf-46bf-b137-8cae4630dc39\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.892929 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp7h7\" (UniqueName: \"kubernetes.io/projected/f1904fac-b0bf-46bf-b137-8cae4630dc39-kube-api-access-sp7h7\") pod \"f1904fac-b0bf-46bf-b137-8cae4630dc39\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.893032 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-proxy-ca-bundles\") pod \"f1904fac-b0bf-46bf-b137-8cae4630dc39\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.893317 4763 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-client-ca" (OuterVolumeSpecName: "client-ca") pod "f1904fac-b0bf-46bf-b137-8cae4630dc39" (UID: "f1904fac-b0bf-46bf-b137-8cae4630dc39"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.893448 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-config" (OuterVolumeSpecName: "config") pod "f1904fac-b0bf-46bf-b137-8cae4630dc39" (UID: "f1904fac-b0bf-46bf-b137-8cae4630dc39"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.893782 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f1904fac-b0bf-46bf-b137-8cae4630dc39" (UID: "f1904fac-b0bf-46bf-b137-8cae4630dc39"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.896725 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1904fac-b0bf-46bf-b137-8cae4630dc39-kube-api-access-sp7h7" (OuterVolumeSpecName: "kube-api-access-sp7h7") pod "f1904fac-b0bf-46bf-b137-8cae4630dc39" (UID: "f1904fac-b0bf-46bf-b137-8cae4630dc39"). InnerVolumeSpecName "kube-api-access-sp7h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.900839 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1904fac-b0bf-46bf-b137-8cae4630dc39-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f1904fac-b0bf-46bf-b137-8cae4630dc39" (UID: "f1904fac-b0bf-46bf-b137-8cae4630dc39"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.963901 4763 generic.go:334] "Generic (PLEG): container finished" podID="3107af8a-7bbd-4214-84e6-a3a18e79510f" containerID="7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb" exitCode=0 Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.963995 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.964004 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" event={"ID":"3107af8a-7bbd-4214-84e6-a3a18e79510f","Type":"ContainerDied","Data":"7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb"} Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.964098 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" event={"ID":"3107af8a-7bbd-4214-84e6-a3a18e79510f","Type":"ContainerDied","Data":"250afbceb4491af0b71e1680cfee8a18ebd2db8880f0ea2683b0332562116b45"} Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.964121 4763 scope.go:117] "RemoveContainer" containerID="7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.966118 4763 generic.go:334] "Generic (PLEG): container finished" podID="f1904fac-b0bf-46bf-b137-8cae4630dc39" containerID="7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e" exitCode=0 Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.966167 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" event={"ID":"f1904fac-b0bf-46bf-b137-8cae4630dc39","Type":"ContainerDied","Data":"7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e"} Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.966190 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" event={"ID":"f1904fac-b0bf-46bf-b137-8cae4630dc39","Type":"ContainerDied","Data":"6d86f7ce32eb06c34ecceb3c5a5ea319381fd9969dad831555e155e547b2cd01"} Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.966232 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.970255 4763 generic.go:334] "Generic (PLEG): container finished" podID="275ea46d-7a78-4457-a5ba-7b3000170d0e" containerID="003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de" exitCode=0 Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.970339 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" event={"ID":"275ea46d-7a78-4457-a5ba-7b3000170d0e","Type":"ContainerDied","Data":"003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de"} Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.970360 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" event={"ID":"275ea46d-7a78-4457-a5ba-7b3000170d0e","Type":"ContainerDied","Data":"4c129b0efc8ba7e81700bd2c14706e7d50e9bb2c2ff1bf01d5aadf4cd55835b7"} Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.970480 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.974063 4763 generic.go:334] "Generic (PLEG): container finished" podID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerID="743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a" exitCode=0 Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.974098 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m6qg" event={"ID":"5a85c02e-9d6e-4d11-be81-242bf4fee8c4","Type":"ContainerDied","Data":"743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a"} Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.974122 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m6qg" event={"ID":"5a85c02e-9d6e-4d11-be81-242bf4fee8c4","Type":"ContainerDied","Data":"9c7dca8f63ce2a8f4eb26e014c54162f55c4578fef6f425b844a6c85dc4561db"} Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.974203 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4m6qg" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.979711 4763 scope.go:117] "RemoveContainer" containerID="7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb" Jan 31 14:58:26 crc kubenswrapper[4763]: E0131 14:58:26.983187 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb\": container with ID starting with 7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb not found: ID does not exist" containerID="7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.983243 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb"} err="failed to get container status \"7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb\": rpc error: code = NotFound desc = could not find container \"7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb\": container with ID starting with 7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb not found: ID does not exist" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.983278 4763 scope.go:117] "RemoveContainer" containerID="7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.995458 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp7h7\" (UniqueName: \"kubernetes.io/projected/f1904fac-b0bf-46bf-b137-8cae4630dc39-kube-api-access-sp7h7\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.995575 4763 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.995585 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1904fac-b0bf-46bf-b137-8cae4630dc39-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.995596 4763 reconciler_common.go:293] "Volume detached for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.995604 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.001817 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd"] Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.006165 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd"] Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.007867 4763 scope.go:117] "RemoveContainer" containerID="7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.008494 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e\": container with ID starting with 7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e not found: ID does not exist" containerID="7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.008526 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e"} err="failed to get container status \"7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e\": rpc error: code = NotFound desc = could not find container \"7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e\": container with ID starting with 7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e not found: ID does not exist" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.008583 4763 scope.go:117] "RemoveContainer" containerID="003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.019509 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pcvn"] Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.019546 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pcvn"] Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.023122 4763 scope.go:117] "RemoveContainer" containerID="003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.023519 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de\": container with ID starting with 003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de not found: ID does not exist" containerID="003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.023562 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de"} err="failed to get container status \"003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de\": rpc error: code 
= NotFound desc = could not find container \"003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de\": container with ID starting with 003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de not found: ID does not exist"
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.023584 4763 scope.go:117] "RemoveContainer" containerID="743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a"
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.026362 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76646487c9-96gk6"]
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.028733 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-76646487c9-96gk6"]
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.034535 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4m6qg"]
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.035713 4763 scope.go:117] "RemoveContainer" containerID="52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9"
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.036992 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4m6qg"]
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.048208 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="275ea46d-7a78-4457-a5ba-7b3000170d0e" path="/var/lib/kubelet/pods/275ea46d-7a78-4457-a5ba-7b3000170d0e/volumes"
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.048727 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3107af8a-7bbd-4214-84e6-a3a18e79510f" path="/var/lib/kubelet/pods/3107af8a-7bbd-4214-84e6-a3a18e79510f/volumes"
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.049195 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" path="/var/lib/kubelet/pods/5a85c02e-9d6e-4d11-be81-242bf4fee8c4/volumes"
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.050265 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1904fac-b0bf-46bf-b137-8cae4630dc39" path="/var/lib/kubelet/pods/f1904fac-b0bf-46bf-b137-8cae4630dc39/volumes"
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.052870 4763 scope.go:117] "RemoveContainer" containerID="020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba"
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.064666 4763 scope.go:117] "RemoveContainer" containerID="743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a"
Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.065677 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a\": container with ID starting with 743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a not found: ID does not exist" containerID="743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a"
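Note: the repeating pattern above ("RemoveContainer", then "ContainerStatus from runtime service failed" with rpc code NotFound, then "DeleteContainer returned error") is a benign race: the container was already removed, so the follow-up CRI status lookup can only come back NotFound, and the kubelet logs it and moves on. A minimal sketch, not the kubelet's actual code path, of treating gRPC NotFound as "already deleted"; removeContainer is a hypothetical stand-in for the CRI RemoveContainer RPC:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer stands in for a CRI RemoveContainer call against cri-o.
    func removeContainer(id string) error {
        return status.Error(codes.NotFound, "could not find container "+id)
    }

    func main() {
        err := removeContainer("743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a")
        if status.Code(err) == codes.NotFound {
            // Already gone: the delete is idempotent, so log and continue rather than fail the cleanup.
            fmt.Println("container already removed:", err)
            return
        }
        if err != nil {
            panic(err)
        }
    }
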
\"743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a\": container with ID starting with 743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a not found: ID does not exist" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.065738 4763 scope.go:117] "RemoveContainer" containerID="52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.066028 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9\": container with ID starting with 52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9 not found: ID does not exist" containerID="52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.066062 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9"} err="failed to get container status \"52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9\": rpc error: code = NotFound desc = could not find container \"52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9\": container with ID starting with 52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9 not found: ID does not exist" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.066085 4763 scope.go:117] "RemoveContainer" containerID="020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.066325 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba\": container with ID starting with 020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba not found: ID does not exist" containerID="020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.066345 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba"} err="failed to get container status \"020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba\": rpc error: code = NotFound desc = could not find container \"020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba\": container with ID starting with 020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba not found: ID does not exist" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.257905 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-79887f45c6-xhvgp"] Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258208 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerName="extract-utilities" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258226 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerName="extract-utilities" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258242 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerName="extract-utilities" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258249 4763 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerName="extract-utilities" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258260 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258271 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258284 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerName="extract-content" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258291 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerName="extract-content" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258301 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1904fac-b0bf-46bf-b137-8cae4630dc39" containerName="controller-manager" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258307 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1904fac-b0bf-46bf-b137-8cae4630dc39" containerName="controller-manager" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258317 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258325 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258337 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258345 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258355 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275ea46d-7a78-4457-a5ba-7b3000170d0e" containerName="oauth-openshift" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258363 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="275ea46d-7a78-4457-a5ba-7b3000170d0e" containerName="oauth-openshift" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258374 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" containerName="extract-utilities" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258381 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" containerName="extract-utilities" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258391 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258400 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258414 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3107af8a-7bbd-4214-84e6-a3a18e79510f" containerName="route-controller-manager" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 
14:58:27.258422 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="3107af8a-7bbd-4214-84e6-a3a18e79510f" containerName="route-controller-manager"
Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258430 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerName="extract-content"
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258437 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerName="extract-content"
Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258452 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerName="extract-utilities"
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258459 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerName="extract-utilities"
Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258468 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" containerName="extract-content"
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258476 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" containerName="extract-content"
Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258484 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerName="extract-content"
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258510 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerName="extract-content"
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258626 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerName="registry-server"
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258640 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerName="registry-server"
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258725 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="275ea46d-7a78-4457-a5ba-7b3000170d0e" containerName="oauth-openshift"
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258738 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="3107af8a-7bbd-4214-84e6-a3a18e79510f" containerName="route-controller-manager"
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258747 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1904fac-b0bf-46bf-b137-8cae4630dc39" containerName="controller-manager"
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258756 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerName="registry-server"
Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258767 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" containerName="registry-server"
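Note: before admitting the replacement pods, the kubelet's CPU and memory managers drop the per-container assignments left behind by the deleted pods (the cpu_manager/state_mem and memory_manager entries above). A minimal sketch of inspecting the CPU manager checkpoint, assuming the default location /var/lib/kubelet/cpu_manager_state on a stock install and decoding it loosely into a map rather than with the kubelet's own types:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        // Default checkpoint path; adjust if the kubelet runs with a non-default --root-dir.
        raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
        if err != nil {
            panic(err)
        }
        var state map[string]any
        if err := json.Unmarshal(raw, &state); err != nil {
            panic(err)
        }
        // Entries vanish from here when "Deleted CPUSet assignment" is logged.
        for k, v := range state {
            fmt.Printf("%s: %v\n", k, v)
        }
    }
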
Need to start a new one" pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.262439 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.265585 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.265634 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.265831 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.266291 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.266307 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.271509 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5"] Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.272579 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.276254 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.277040 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.277479 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.277871 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.277999 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.278174 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.289346 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79887f45c6-xhvgp"] Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.302561 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/380e01ec-794a-4356-995c-ef1113b1b126-proxy-ca-bundles\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.302808 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/380e01ec-794a-4356-995c-ef1113b1b126-serving-cert\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.302976 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/380e01ec-794a-4356-995c-ef1113b1b126-config\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.309887 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.310964 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/380e01ec-794a-4356-995c-ef1113b1b126-client-ca\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.311128 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbp5b\" (UniqueName: \"kubernetes.io/projected/380e01ec-794a-4356-995c-ef1113b1b126-kube-api-access-sbp5b\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.315411 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5"] Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.413644 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p2q2\" (UniqueName: \"kubernetes.io/projected/45e1613c-b869-484e-a000-da4460283966-kube-api-access-2p2q2\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.413843 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/380e01ec-794a-4356-995c-ef1113b1b126-client-ca\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.413928 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbp5b\" (UniqueName: \"kubernetes.io/projected/380e01ec-794a-4356-995c-ef1113b1b126-kube-api-access-sbp5b\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.414034 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/45e1613c-b869-484e-a000-da4460283966-serving-cert\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.414113 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/380e01ec-794a-4356-995c-ef1113b1b126-proxy-ca-bundles\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.414205 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/380e01ec-794a-4356-995c-ef1113b1b126-serving-cert\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.414279 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/380e01ec-794a-4356-995c-ef1113b1b126-config\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.414342 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e1613c-b869-484e-a000-da4460283966-config\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.414380 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45e1613c-b869-484e-a000-da4460283966-client-ca\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.417377 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/380e01ec-794a-4356-995c-ef1113b1b126-proxy-ca-bundles\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.418742 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/380e01ec-794a-4356-995c-ef1113b1b126-config\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.419410 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/380e01ec-794a-4356-995c-ef1113b1b126-client-ca\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " 
pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.422785 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/380e01ec-794a-4356-995c-ef1113b1b126-serving-cert\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.447201 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbp5b\" (UniqueName: \"kubernetes.io/projected/380e01ec-794a-4356-995c-ef1113b1b126-kube-api-access-sbp5b\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.516181 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e1613c-b869-484e-a000-da4460283966-config\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.516288 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45e1613c-b869-484e-a000-da4460283966-client-ca\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.516327 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p2q2\" (UniqueName: \"kubernetes.io/projected/45e1613c-b869-484e-a000-da4460283966-kube-api-access-2p2q2\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.516425 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45e1613c-b869-484e-a000-da4460283966-serving-cert\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.517439 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e1613c-b869-484e-a000-da4460283966-config\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.517608 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45e1613c-b869-484e-a000-da4460283966-client-ca\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.523331 4763 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45e1613c-b869-484e-a000-da4460283966-serving-cert\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.545307 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p2q2\" (UniqueName: \"kubernetes.io/projected/45e1613c-b869-484e-a000-da4460283966-kube-api-access-2p2q2\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.616544 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.633347 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.062618 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5"] Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.104156 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79887f45c6-xhvgp"] Jan 31 14:58:28 crc kubenswrapper[4763]: W0131 14:58:28.116738 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod380e01ec_794a_4356_995c_ef1113b1b126.slice/crio-d08f359c5b02f449c344da21761eb32c1b0b871bc4c01ea0f73859b2fbd0742f WatchSource:0}: Error finding container d08f359c5b02f449c344da21761eb32c1b0b871bc4c01ea0f73859b2fbd0742f: Status 404 returned error can't find the container with id d08f359c5b02f449c344da21761eb32c1b0b871bc4c01ea0f73859b2fbd0742f Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.249480 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-65ff5df46b-wgctn"] Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.250123 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.252508 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.252684 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.252954 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.253079 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.253178 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.253344 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.253463 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.253569 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.253754 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.253921 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.258078 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.258539 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.261301 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65ff5df46b-wgctn"] Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.263734 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.269103 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.273064 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.325875 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " 
pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326224 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7xmf\" (UniqueName: \"kubernetes.io/projected/d05a2994-becc-48cf-baf3-a17f479924ba-kube-api-access-k7xmf\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326271 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-audit-policies\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326290 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326526 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d05a2994-becc-48cf-baf3-a17f479924ba-audit-dir\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326572 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326606 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-template-error\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326624 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-session\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326672 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326725 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-router-certs\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326897 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-service-ca\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326975 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.327008 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-template-login\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.327033 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428131 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-service-ca\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428180 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428208 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-template-login\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428227 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428253 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428280 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7xmf\" (UniqueName: \"kubernetes.io/projected/d05a2994-becc-48cf-baf3-a17f479924ba-kube-api-access-k7xmf\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428301 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-audit-policies\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428318 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428346 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d05a2994-becc-48cf-baf3-a17f479924ba-audit-dir\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428360 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428379 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-template-error\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428410 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-session\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428427 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428446 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-router-certs\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.429250 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-audit-policies\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.429305 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d05a2994-becc-48cf-baf3-a17f479924ba-audit-dir\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.429370 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.429583 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.429931 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.438313 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.438526 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.438739 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-template-error\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.439080 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-template-login\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.439309 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-router-certs\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.439566 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.440280 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-session\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.440773 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65ff5df46b-wgctn\" 
(UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.447997 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7xmf\" (UniqueName: \"kubernetes.io/projected/d05a2994-becc-48cf-baf3-a17f479924ba-kube-api-access-k7xmf\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.600948 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.992740 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" event={"ID":"380e01ec-794a-4356-995c-ef1113b1b126","Type":"ContainerStarted","Data":"47ae2119b32406b5a4bc9ed6f6384c6eb429759a031f29bef87cdc7554dc4ff3"} Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.993129 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.993149 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" event={"ID":"380e01ec-794a-4356-995c-ef1113b1b126","Type":"ContainerStarted","Data":"d08f359c5b02f449c344da21761eb32c1b0b871bc4c01ea0f73859b2fbd0742f"} Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.994369 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" event={"ID":"45e1613c-b869-484e-a000-da4460283966","Type":"ContainerStarted","Data":"372ab3b230cad3e5b4feed5027ebd56e62177302f1d6d673c6694d49f38b2ab1"} Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.994395 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" event={"ID":"45e1613c-b869-484e-a000-da4460283966","Type":"ContainerStarted","Data":"4b774e08c3554b08f112b7a609047ea5b045fe00b717ed824cfcaeffd79d2bf1"} Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.994847 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.998759 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:29 crc kubenswrapper[4763]: I0131 14:58:29.001004 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:29 crc kubenswrapper[4763]: I0131 14:58:29.017836 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" podStartSLOduration=4.017817075 podStartE2EDuration="4.017817075s" podCreationTimestamp="2026-01-31 14:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:58:29.015453428 +0000 UTC m=+228.770191751" watchObservedRunningTime="2026-01-31 14:58:29.017817075 +0000 UTC 
m=+228.772555368" Jan 31 14:58:29 crc kubenswrapper[4763]: I0131 14:58:29.057348 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" podStartSLOduration=4.057330435 podStartE2EDuration="4.057330435s" podCreationTimestamp="2026-01-31 14:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:58:29.054233258 +0000 UTC m=+228.808971571" watchObservedRunningTime="2026-01-31 14:58:29.057330435 +0000 UTC m=+228.812068728" Jan 31 14:58:29 crc kubenswrapper[4763]: I0131 14:58:29.058862 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65ff5df46b-wgctn"] Jan 31 14:58:30 crc kubenswrapper[4763]: I0131 14:58:30.002282 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" event={"ID":"d05a2994-becc-48cf-baf3-a17f479924ba","Type":"ContainerStarted","Data":"2209620677ee635e4efed085b81c777d11d8d97617cfed685f97d1355a8bb630"} Jan 31 14:58:30 crc kubenswrapper[4763]: I0131 14:58:30.002361 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" event={"ID":"d05a2994-becc-48cf-baf3-a17f479924ba","Type":"ContainerStarted","Data":"92471a19942769780ce31578273efb7d154e1e41a1156d1e8c2cd3db8e6802a7"} Jan 31 14:58:31 crc kubenswrapper[4763]: I0131 14:58:31.037202 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" podStartSLOduration=31.037179432 podStartE2EDuration="31.037179432s" podCreationTimestamp="2026-01-31 14:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:58:31.036002258 +0000 UTC m=+230.790740551" watchObservedRunningTime="2026-01-31 14:58:31.037179432 +0000 UTC m=+230.791917735" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.457806 4763 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.458810 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c" gracePeriod=15 Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.458855 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e" gracePeriod=15 Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.458954 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b" gracePeriod=15 Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.458900 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b" gracePeriod=15 Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.458931 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12" gracePeriod=15 Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.461719 4763 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 14:58:34 crc kubenswrapper[4763]: E0131 14:58:34.461941 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.461959 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 14:58:34 crc kubenswrapper[4763]: E0131 14:58:34.461974 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.461981 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 14:58:34 crc kubenswrapper[4763]: E0131 14:58:34.461993 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.462000 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 14:58:34 crc kubenswrapper[4763]: E0131 14:58:34.462009 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.462016 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 14:58:34 crc kubenswrapper[4763]: E0131 14:58:34.462029 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.462036 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 14:58:34 crc kubenswrapper[4763]: E0131 14:58:34.462048 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.462054 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.462181 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.462195 4763 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.462206 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.462218 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.462228 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.463610 4763 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.464564 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.470892 4763 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.511757 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.606719 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.606762 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.606785 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.606804 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.606837 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.606864 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.606911 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.607005 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.707906 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.707964 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.707993 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708010 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708046 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708081 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708105 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708082 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708163 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708175 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708200 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708122 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708160 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708218 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708194 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708038 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.806486 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: W0131 14:58:34.835735 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-f3f0047e9cf6cf792d1c255b8b4cbb944c0ff4f1a8d24ed5857f9a3c030dac56 WatchSource:0}: Error finding container f3f0047e9cf6cf792d1c255b8b4cbb944c0ff4f1a8d24ed5857f9a3c030dac56: Status 404 returned error can't find the container with id f3f0047e9cf6cf792d1c255b8b4cbb944c0ff4f1a8d24ed5857f9a3c030dac56 Jan 31 14:58:34 crc kubenswrapper[4763]: E0131 14:58:34.840097 4763 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fd8c3fc054bd2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 14:58:34.83832213 +0000 UTC m=+234.593060463,LastTimestamp:2026-01-31 14:58:34.83832213 +0000 UTC m=+234.593060463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.045447 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.047282 4763 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b" exitCode=0 Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.047332 4763 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12" exitCode=0 Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.047348 4763 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e" exitCode=0 Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.047361 4763 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b" exitCode=2 Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.053016 4763 generic.go:334] "Generic (PLEG): container finished" podID="e242425d-c262-45a8-b933-a84abec6740e" containerID="441b596e02022ae8016874e04bb2e0f024181a6d82a5065a1b6c5f7422bd492e" exitCode=0 Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.053295 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f3f0047e9cf6cf792d1c255b8b4cbb944c0ff4f1a8d24ed5857f9a3c030dac56"} Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.053361 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e242425d-c262-45a8-b933-a84abec6740e","Type":"ContainerDied","Data":"441b596e02022ae8016874e04bb2e0f024181a6d82a5065a1b6c5f7422bd492e"} Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.054047 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.054654 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.061533 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7c5454adb66dbfe52e542cda6188d149bbe0998a17793a877cd0c2d065f9ddf8"} Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.062348 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.062763 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.517466 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.518565 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.518980 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.643092 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-var-lock\") pod \"e242425d-c262-45a8-b933-a84abec6740e\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.643194 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e242425d-c262-45a8-b933-a84abec6740e-kube-api-access\") pod \"e242425d-c262-45a8-b933-a84abec6740e\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.643239 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-kubelet-dir\") pod \"e242425d-c262-45a8-b933-a84abec6740e\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.643240 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-var-lock" (OuterVolumeSpecName: "var-lock") pod "e242425d-c262-45a8-b933-a84abec6740e" (UID: "e242425d-c262-45a8-b933-a84abec6740e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.643322 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e242425d-c262-45a8-b933-a84abec6740e" (UID: "e242425d-c262-45a8-b933-a84abec6740e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.643533 4763 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.643552 4763 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.650361 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e242425d-c262-45a8-b933-a84abec6740e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e242425d-c262-45a8-b933-a84abec6740e" (UID: "e242425d-c262-45a8-b933-a84abec6740e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.744457 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e242425d-c262-45a8-b933-a84abec6740e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.836792 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.837715 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.838260 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.838924 4763 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.839520 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.946653 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.946760 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.946857 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.946849 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.946899 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.946930 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.947421 4763 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.947455 4763 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.947481 4763 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:36 crc kubenswrapper[4763]: E0131 14:58:36.994907 4763 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fd8c3fc054bd2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 14:58:34.83832213 +0000 UTC m=+234.593060463,LastTimestamp:2026-01-31 14:58:34.83832213 +0000 UTC m=+234.593060463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.053745 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.072122 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e242425d-c262-45a8-b933-a84abec6740e","Type":"ContainerDied","Data":"dcc37e590bdd58dcc1ac284fc11c47be208afeacbdd9d483c5b8907e05e7ba95"} Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.072161 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcc37e590bdd58dcc1ac284fc11c47be208afeacbdd9d483c5b8907e05e7ba95" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.072193 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.077997 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.078688 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.078901 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.080305 4763 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c" exitCode=0 Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.080449 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.080469 4763 scope.go:117] "RemoveContainer" containerID="b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.081784 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.082399 4763 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.082928 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.084817 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.085735 4763 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.087598 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.106085 4763 scope.go:117] "RemoveContainer" containerID="02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.130420 4763 scope.go:117] "RemoveContainer" containerID="1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.151991 4763 scope.go:117] "RemoveContainer" containerID="5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.179316 4763 scope.go:117] "RemoveContainer" containerID="ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.202395 4763 scope.go:117] "RemoveContainer" containerID="89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295" Jan 31 14:58:37 crc 
kubenswrapper[4763]: I0131 14:58:37.229041 4763 scope.go:117] "RemoveContainer" containerID="b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b" Jan 31 14:58:37 crc kubenswrapper[4763]: E0131 14:58:37.229659 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\": container with ID starting with b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b not found: ID does not exist" containerID="b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.229759 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b"} err="failed to get container status \"b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\": rpc error: code = NotFound desc = could not find container \"b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\": container with ID starting with b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b not found: ID does not exist" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.229809 4763 scope.go:117] "RemoveContainer" containerID="02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12" Jan 31 14:58:37 crc kubenswrapper[4763]: E0131 14:58:37.230431 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\": container with ID starting with 02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12 not found: ID does not exist" containerID="02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.230496 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12"} err="failed to get container status \"02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\": rpc error: code = NotFound desc = could not find container \"02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\": container with ID starting with 02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12 not found: ID does not exist" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.230528 4763 scope.go:117] "RemoveContainer" containerID="1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e" Jan 31 14:58:37 crc kubenswrapper[4763]: E0131 14:58:37.231199 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\": container with ID starting with 1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e not found: ID does not exist" containerID="1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.231263 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e"} err="failed to get container status \"1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\": rpc error: code = NotFound desc = could not find container 
\"1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\": container with ID starting with 1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e not found: ID does not exist" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.231298 4763 scope.go:117] "RemoveContainer" containerID="5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b" Jan 31 14:58:37 crc kubenswrapper[4763]: E0131 14:58:37.231773 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\": container with ID starting with 5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b not found: ID does not exist" containerID="5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.231832 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b"} err="failed to get container status \"5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\": rpc error: code = NotFound desc = could not find container \"5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\": container with ID starting with 5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b not found: ID does not exist" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.231859 4763 scope.go:117] "RemoveContainer" containerID="ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c" Jan 31 14:58:37 crc kubenswrapper[4763]: E0131 14:58:37.232332 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\": container with ID starting with ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c not found: ID does not exist" containerID="ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.232391 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c"} err="failed to get container status \"ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\": rpc error: code = NotFound desc = could not find container \"ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\": container with ID starting with ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c not found: ID does not exist" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.232433 4763 scope.go:117] "RemoveContainer" containerID="89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295" Jan 31 14:58:37 crc kubenswrapper[4763]: E0131 14:58:37.232985 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\": container with ID starting with 89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295 not found: ID does not exist" containerID="89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.233042 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295"} 
err="failed to get container status \"89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\": rpc error: code = NotFound desc = could not find container \"89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\": container with ID starting with 89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295 not found: ID does not exist" Jan 31 14:58:38 crc kubenswrapper[4763]: I0131 14:58:38.602272 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:38 crc kubenswrapper[4763]: I0131 14:58:38.610887 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:38 crc kubenswrapper[4763]: I0131 14:58:38.611541 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:38 crc kubenswrapper[4763]: I0131 14:58:38.611967 4763 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:38 crc kubenswrapper[4763]: I0131 14:58:38.612325 4763 status_manager.go:851] "Failed to get status for pod" podUID="d05a2994-becc-48cf-baf3-a17f479924ba" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-65ff5df46b-wgctn\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:38 crc kubenswrapper[4763]: I0131 14:58:38.612912 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:40 crc kubenswrapper[4763]: E0131 14:58:40.785267 4763 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:40 crc kubenswrapper[4763]: E0131 14:58:40.785719 4763 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:40 crc kubenswrapper[4763]: E0131 14:58:40.786089 4763 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:40 crc kubenswrapper[4763]: E0131 14:58:40.786558 4763 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 
14:58:40 crc kubenswrapper[4763]: E0131 14:58:40.787084 4763 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:40 crc kubenswrapper[4763]: I0131 14:58:40.787121 4763 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 31 14:58:40 crc kubenswrapper[4763]: E0131 14:58:40.787452 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="200ms" Jan 31 14:58:40 crc kubenswrapper[4763]: E0131 14:58:40.988283 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="400ms" Jan 31 14:58:41 crc kubenswrapper[4763]: I0131 14:58:41.044485 4763 status_manager.go:851] "Failed to get status for pod" podUID="d05a2994-becc-48cf-baf3-a17f479924ba" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-65ff5df46b-wgctn\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:41 crc kubenswrapper[4763]: I0131 14:58:41.045315 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:41 crc kubenswrapper[4763]: I0131 14:58:41.045616 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:41 crc kubenswrapper[4763]: E0131 14:58:41.390141 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="800ms" Jan 31 14:58:42 crc kubenswrapper[4763]: E0131 14:58:42.191796 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="1.6s" Jan 31 14:58:43 crc kubenswrapper[4763]: E0131 14:58:43.792522 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="3.2s" Jan 31 14:58:46 crc kubenswrapper[4763]: E0131 14:58:46.993546 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="6.4s" Jan 31 14:58:46 crc kubenswrapper[4763]: E0131 14:58:46.996152 4763 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fd8c3fc054bd2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 14:58:34.83832213 +0000 UTC m=+234.593060463,LastTimestamp:2026-01-31 14:58:34.83832213 +0000 UTC m=+234.593060463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 14:58:49 crc kubenswrapper[4763]: I0131 14:58:49.152351 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 14:58:49 crc kubenswrapper[4763]: I0131 14:58:49.152442 4763 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f" exitCode=1 Jan 31 14:58:49 crc kubenswrapper[4763]: I0131 14:58:49.152506 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f"} Jan 31 14:58:49 crc kubenswrapper[4763]: I0131 14:58:49.153433 4763 scope.go:117] "RemoveContainer" containerID="775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f" Jan 31 14:58:49 crc kubenswrapper[4763]: I0131 14:58:49.153885 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:49 crc kubenswrapper[4763]: I0131 14:58:49.155751 4763 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:49 crc kubenswrapper[4763]: I0131 14:58:49.156292 4763 status_manager.go:851] "Failed to get status for pod" podUID="d05a2994-becc-48cf-baf3-a17f479924ba" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-65ff5df46b-wgctn\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:49 crc kubenswrapper[4763]: I0131 14:58:49.156646 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.041836 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.043277 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.043809 4763 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.044302 4763 status_manager.go:851] "Failed to get status for pod" podUID="d05a2994-becc-48cf-baf3-a17f479924ba" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-65ff5df46b-wgctn\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.044967 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.057179 4763 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.057233 4763 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f" Jan 31 14:58:50 crc kubenswrapper[4763]: E0131 14:58:50.057827 4763 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.058614 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:50 crc kubenswrapper[4763]: W0131 14:58:50.077067 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-a02cd5ef656b6f07cc9a09971eea6eda7e1d2eb0ff5fdc2939425ca67aafbd5f WatchSource:0}: Error finding container a02cd5ef656b6f07cc9a09971eea6eda7e1d2eb0ff5fdc2939425ca67aafbd5f: Status 404 returned error can't find the container with id a02cd5ef656b6f07cc9a09971eea6eda7e1d2eb0ff5fdc2939425ca67aafbd5f Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.164048 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.164440 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d2a870ccff7eb78880185ce2922012957b68a7b3e4028f2b3e48d2dcd1b9f481"} Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.165866 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.166414 4763 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.166977 4763 status_manager.go:851] "Failed to get status for pod" podUID="d05a2994-becc-48cf-baf3-a17f479924ba" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-65ff5df46b-wgctn\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.167074 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a02cd5ef656b6f07cc9a09971eea6eda7e1d2eb0ff5fdc2939425ca67aafbd5f"} Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.167621 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.062765 4763 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: 
connect: connection refused" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.063546 4763 status_manager.go:851] "Failed to get status for pod" podUID="d05a2994-becc-48cf-baf3-a17f479924ba" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-65ff5df46b-wgctn\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.064185 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.064643 4763 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.065117 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.181766 4763 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="07f6bff26f370a15049c5174828a86d8544bf96781bc64207289748f486b7ac8" exitCode=0 Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.181842 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"07f6bff26f370a15049c5174828a86d8544bf96781bc64207289748f486b7ac8"} Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.182486 4763 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.182543 4763 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.182820 4763 status_manager.go:851] "Failed to get status for pod" podUID="d05a2994-becc-48cf-baf3-a17f479924ba" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-65ff5df46b-wgctn\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.183284 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:51 crc kubenswrapper[4763]: E0131 14:58:51.183330 4763 mirror_client.go:138] "Failed 
deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.183781 4763 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused"
Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.184259 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused"
Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.184930 4763 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused"
Jan 31 14:58:52 crc kubenswrapper[4763]: I0131 14:58:52.192736 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"87dcd09f64d885c5b934a05e71ca5aa27fa58062ab5a54196206479f30add3e9"}
Jan 31 14:58:52 crc kubenswrapper[4763]: I0131 14:58:52.193319 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8e13de8c254a91a8b6d3a77c97a8e0bf523677948ccfb4eaa7d89f4c22510004"}
Jan 31 14:58:52 crc kubenswrapper[4763]: I0131 14:58:52.193333 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e568897626546bb8112884f3788ed3d71e6541529265ad71b5c66893c2db116d"}
Jan 31 14:58:53 crc kubenswrapper[4763]: I0131 14:58:53.199486 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"11513eb82f391a20aebebd87396db7125c8521123ab258751f2b8d497f4e167f"}
Jan 31 14:58:53 crc kubenswrapper[4763]: I0131 14:58:53.199526 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8a2a810430ac3145318caf57bfafd2acfe3ceb2117e75b0c71af39aa051b8df8"}
Jan 31 14:58:53 crc kubenswrapper[4763]: I0131 14:58:53.199650 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 14:58:53 crc kubenswrapper[4763]: I0131 14:58:53.199750 4763 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f"
Jan 31 14:58:53 crc kubenswrapper[4763]: I0131 14:58:53.199776 4763 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f"
Jan 31 14:58:55 crc kubenswrapper[4763]: I0131 14:58:55.059354 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 14:58:55 crc kubenswrapper[4763]: I0131 14:58:55.059863 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 14:58:55 crc kubenswrapper[4763]: I0131 14:58:55.067888 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 14:58:55 crc kubenswrapper[4763]: I0131 14:58:55.394980 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 14:58:55 crc kubenswrapper[4763]: I0131 14:58:55.737795 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 14:58:55 crc kubenswrapper[4763]: I0131 14:58:55.742104 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 14:58:58 crc kubenswrapper[4763]: I0131 14:58:58.216281 4763 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 14:58:58 crc kubenswrapper[4763]: I0131 14:58:58.348578 4763 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ebd3e29f-bf35-46d2-9605-e8963646b845"
Jan 31 14:58:59 crc kubenswrapper[4763]: I0131 14:58:59.236801 4763 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f"
Jan 31 14:58:59 crc kubenswrapper[4763]: I0131 14:58:59.236834 4763 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f"
Jan 31 14:58:59 crc kubenswrapper[4763]: I0131 14:58:59.244168 4763 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ebd3e29f-bf35-46d2-9605-e8963646b845"
Jan 31 14:58:59 crc kubenswrapper[4763]: I0131 14:58:59.244764 4763 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://e568897626546bb8112884f3788ed3d71e6541529265ad71b5c66893c2db116d"
Jan 31 14:58:59 crc kubenswrapper[4763]: I0131 14:58:59.244788 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 14:59:00 crc kubenswrapper[4763]: I0131 14:59:00.244404 4763 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f"
Jan 31 14:59:00 crc kubenswrapper[4763]: I0131 14:59:00.244448 4763 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f"
Jan 31 14:59:00 crc kubenswrapper[4763]: I0131 14:59:00.249781 4763 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ebd3e29f-bf35-46d2-9605-e8963646b845"
Jan 31 14:59:05 crc kubenswrapper[4763]: I0131 14:59:05.400897 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 31 14:59:08 crc kubenswrapper[4763]: I0131 14:59:08.130229 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 31 14:59:08 crc kubenswrapper[4763]: I0131 14:59:08.260131 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 31 14:59:08 crc kubenswrapper[4763]: I0131 14:59:08.487943 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 31 14:59:08 crc kubenswrapper[4763]: I0131 14:59:08.622459 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 31 14:59:08 crc kubenswrapper[4763]: I0131 14:59:08.650580 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.126020 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.211075 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.231330 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.346590 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.353581 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.467012 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.474792 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.545511 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.705874 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.888067 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.975067 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.976848 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.021010 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.170527 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.315788 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.378165 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.481289 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.547552 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.597385 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.878078 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.977131 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.991791 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.057324 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.083626 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.200241 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.275038 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.397945 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.402315 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.407108 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.464837 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.547013 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.568879 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.592411 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.635011 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.692243 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.867260 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.880146 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.998939 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.032531 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.098229 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.101512 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.105033 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.135048 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.229533 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.286741 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.329973 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.439462 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.486517 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.505870 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.516486 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.609104 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.686979 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.698566 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.737393 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.864790 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.896434 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.984011 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.164447 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.197390 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.202546 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.202860 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.222331 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.403895 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.520164 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.544092 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.558280 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.569687 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.631830 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.667409 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.719728 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.746131 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.758725 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.862996 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.871017 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.902955 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.510306 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.510326 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.510550 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.517894 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.518217 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.518368 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.518788 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.519017 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.519817 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.523818 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.524184 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.524441 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.524860 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.525028 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.525155 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.537670 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.554085 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.628605 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.762216 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.776684 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.804141 4763 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.844686 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.933559 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.936602 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.083140 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.178808 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.355397 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.521725 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.578914 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.644166 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.739065 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.768677 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.932645 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.963972 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.035751 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.068793 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.068826 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.070045 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.446498 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.487910 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.496426 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.534501 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.549935 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.566020 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.600280 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.602066 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.611343 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.807969 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.819812 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.872832 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.993224 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.059483 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.124381 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.136900 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.169807 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.196520 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.219966 4763 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.224377 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=43.224357025 podStartE2EDuration="43.224357025s" podCreationTimestamp="2026-01-31 14:58:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:58:58.335234418 +0000 UTC m=+258.089972711" watchObservedRunningTime="2026-01-31 14:59:17.224357025 +0000 UTC m=+276.979095318"
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.224790 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.224830 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.228306 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.263646 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.263624403 podStartE2EDuration="19.263624403s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:59:17.242332343 +0000 UTC m=+276.997070676" watchObservedRunningTime="2026-01-31 14:59:17.263624403 +0000 UTC m=+277.018362696"
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.309477 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.360633 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.491668 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.572466 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.627009 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.709622 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.745750 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.753239 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.798542 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.898741 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.034157 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.160035 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.165234 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.241462 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.257483 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.260814 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.264644 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.272212 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.304641 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.340847 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.465605 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.570501 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.615753 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.644194 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.648126 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.730287 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.730386 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.739177 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.742528 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.819541 4763 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.851157 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.931388 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.979808 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.981965 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.136468 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.167479 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.172894 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.251423 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.273219 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.357869 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.427655 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.470940 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.471607 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.557555 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.595143 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.709514 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.710168 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.795760 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.872690 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.996890 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.064983 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.106922 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.184138 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.379340 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.430179 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.445645 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.490047 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.528208 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.597098 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.617540 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.649868 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.804850 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.823471 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.834476 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.836246 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.839758 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.853110 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.858318 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.889539 4763 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.889839 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://7c5454adb66dbfe52e542cda6188d149bbe0998a17793a877cd0c2d065f9ddf8" gracePeriod=5
Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.904161 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.067345 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.080494 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.185505 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.342484 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.367346 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.477068 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.538136 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.549933 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.579620 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.631944 4763 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.745426 4763 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.989321 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 31 14:59:22 crc kubenswrapper[4763]: I0131 14:59:22.001085 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 31 14:59:22 crc kubenswrapper[4763]: I0131 14:59:22.118657 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 31 14:59:22 crc kubenswrapper[4763]: I0131 14:59:22.325916 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 31 14:59:22 crc kubenswrapper[4763]: I0131 14:59:22.552837 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 31 14:59:22 crc kubenswrapper[4763]: I0131 14:59:22.583400 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 31 14:59:22 crc kubenswrapper[4763]: I0131 14:59:22.616914 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 31 14:59:23 crc kubenswrapper[4763]: I0131 14:59:23.124142 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 31 14:59:23 crc kubenswrapper[4763]: I0131 14:59:23.283268 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 31 14:59:23 crc kubenswrapper[4763]: I0131 14:59:23.342429 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 31 14:59:23 crc kubenswrapper[4763]: I0131 14:59:23.349790 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 31 14:59:23 crc kubenswrapper[4763]: I0131 14:59:23.488178 4763 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 31 14:59:23 crc kubenswrapper[4763]: I0131 14:59:23.710566 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 31 14:59:23 crc kubenswrapper[4763]: I0131 14:59:23.822643 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 31 14:59:24 crc kubenswrapper[4763]: I0131 14:59:24.034839 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.406859 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.406941 4763 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="7c5454adb66dbfe52e542cda6188d149bbe0998a17793a877cd0c2d065f9ddf8" exitCode=137
Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.468348 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.468764 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641348 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641439 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641431 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641534 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641599 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641630 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641654 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641745 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641840 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641996 4763 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.642011 4763 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.642023 4763 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.642033 4763 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.652423 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.742515 4763 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 31 14:59:27 crc kubenswrapper[4763]: I0131 14:59:27.063561 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 31 14:59:27 crc kubenswrapper[4763]: I0131 14:59:27.064043 4763 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 31 14:59:27 crc kubenswrapper[4763]: I0131 14:59:27.075552 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 31 14:59:27 crc kubenswrapper[4763]: I0131 14:59:27.075616 4763 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0a0f5c67-c726-4345-9630-e2b665ce511c"
Jan 31 14:59:27 crc kubenswrapper[4763]: I0131 14:59:27.079614 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 31 14:59:27 crc kubenswrapper[4763]: I0131 14:59:27.079922 4763 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0a0f5c67-c726-4345-9630-e2b665ce511c"
Jan 31 14:59:27 crc kubenswrapper[4763]: I0131 14:59:27.415068 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 31 14:59:27 crc kubenswrapper[4763]: I0131 14:59:27.415225 4763 scope.go:117] "RemoveContainer" containerID="7c5454adb66dbfe52e542cda6188d149bbe0998a17793a877cd0c2d065f9ddf8"
Jan 31 14:59:27 crc kubenswrapper[4763]: I0131 14:59:27.415465 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 14:59:34 crc kubenswrapper[4763]: I0131 14:59:34.402061 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 31 14:59:39 crc kubenswrapper[4763]: I0131 14:59:39.348315 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 31 14:59:39 crc kubenswrapper[4763]: I0131 14:59:39.489156 4763 generic.go:334] "Generic (PLEG): container finished" podID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerID="f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca" exitCode=0
Jan 31 14:59:39 crc kubenswrapper[4763]: I0131 14:59:39.489199 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" event={"ID":"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc","Type":"ContainerDied","Data":"f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca"}
Jan 31 14:59:39 crc kubenswrapper[4763]: I0131 14:59:39.489862 4763 scope.go:117] "RemoveContainer" containerID="f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca"
Jan 31 14:59:39 crc kubenswrapper[4763]: I0131 14:59:39.782218 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 31 14:59:40 crc kubenswrapper[4763]: I0131 14:59:40.498588 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" event={"ID":"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc","Type":"ContainerStarted","Data":"94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445"}
Jan 31 14:59:40 crc kubenswrapper[4763]: I0131 14:59:40.499119 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf"
Jan 31 14:59:40 crc kubenswrapper[4763]: I0131 14:59:40.503070 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf"
Jan 31 14:59:40 crc kubenswrapper[4763]: I0131 14:59:40.829831 4763 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 31 14:59:41 crc kubenswrapper[4763]: I0131 14:59:41.473466 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 31 14:59:42 crc kubenswrapper[4763]: I0131 14:59:42.561019 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 31 14:59:43 crc kubenswrapper[4763]: I0131 14:59:43.128180 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 31 14:59:43 crc kubenswrapper[4763]: I0131 14:59:43.798676 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 31 14:59:43 crc kubenswrapper[4763]: I0131 14:59:43.973845 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 31 14:59:45 crc kubenswrapper[4763]: I0131 14:59:45.850367 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 31 14:59:46 crc kubenswrapper[4763]: I0131 14:59:46.574164 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 31 14:59:46 crc kubenswrapper[4763]: I0131 14:59:46.929689 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 31 14:59:48 crc kubenswrapper[4763]: I0131 14:59:48.539474 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 31 14:59:49 crc kubenswrapper[4763]: I0131 14:59:49.762199 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 31 14:59:52 crc kubenswrapper[4763]: I0131 14:59:52.543594 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 31 14:59:53 crc kubenswrapper[4763]: I0131 14:59:53.291424 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 31 14:59:53 crc kubenswrapper[4763]: I0131 14:59:53.529046 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 31 14:59:53 crc kubenswrapper[4763]: I0131 14:59:53.646796 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 31 14:59:56 crc kubenswrapper[4763]: I0131 14:59:56.943279 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.190661 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx"]
Jan 31 15:00:00 crc kubenswrapper[4763]: E0131 15:00:00.191284 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e242425d-c262-45a8-b933-a84abec6740e" containerName="installer"
Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.191302 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e242425d-c262-45a8-b933-a84abec6740e" containerName="installer"
Jan 31 15:00:00 crc kubenswrapper[4763]: E0131 15:00:00.191325 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.191335 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.191477 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e242425d-c262-45a8-b933-a84abec6740e" containerName="installer"
Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.191502 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.192018 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx"
Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.194487 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.196259 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.201331 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8jm9\" (UniqueName: \"kubernetes.io/projected/77150388-1064-46e6-9636-ebfa1eacf88f-kube-api-access-h8jm9\") pod \"collect-profiles-29497860-lhblx\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx"
Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.201412 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77150388-1064-46e6-9636-ebfa1eacf88f-secret-volume\") pod \"collect-profiles-29497860-lhblx\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx"
Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.201433 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77150388-1064-46e6-9636-ebfa1eacf88f-config-volume\") pod \"collect-profiles-29497860-lhblx\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx"
Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.204045 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx"]
Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.302269 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77150388-1064-46e6-9636-ebfa1eacf88f-secret-volume\") pod \"collect-profiles-29497860-lhblx\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx"
Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.302367 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77150388-1064-46e6-9636-ebfa1eacf88f-config-volume\") pod \"collect-profiles-29497860-lhblx\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx"
Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.302919 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8jm9\" (UniqueName: \"kubernetes.io/projected/77150388-1064-46e6-9636-ebfa1eacf88f-kube-api-access-h8jm9\") pod \"collect-profiles-29497860-lhblx\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx"
Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.304202 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77150388-1064-46e6-9636-ebfa1eacf88f-config-volume\") pod
\"collect-profiles-29497860-lhblx\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.314455 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77150388-1064-46e6-9636-ebfa1eacf88f-secret-volume\") pod \"collect-profiles-29497860-lhblx\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.339327 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8jm9\" (UniqueName: \"kubernetes.io/projected/77150388-1064-46e6-9636-ebfa1eacf88f-kube-api-access-h8jm9\") pod \"collect-profiles-29497860-lhblx\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.513384 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:02 crc kubenswrapper[4763]: I0131 15:00:02.065078 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 15:00:03 crc kubenswrapper[4763]: I0131 15:00:03.104399 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 15:00:03 crc kubenswrapper[4763]: E0131 15:00:03.710062 4763 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 31 15:00:03 crc kubenswrapper[4763]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29497860-lhblx_openshift-operator-lifecycle-manager_77150388-1064-46e6-9636-ebfa1eacf88f_0(8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29497860-lhblx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede" Netns:"/var/run/netns/82607e9f-932d-4c94-8fbc-d492a7b54d60" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29497860-lhblx;K8S_POD_INFRA_CONTAINER_ID=8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede;K8S_POD_UID=77150388-1064-46e6-9636-ebfa1eacf88f" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx/77150388-1064-46e6-9636-ebfa1eacf88f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod collect-profiles-29497860-lhblx in out of cluster comm: pod "collect-profiles-29497860-lhblx" not found Jan 31 15:00:03 crc kubenswrapper[4763]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 31 15:00:03 crc 
kubenswrapper[4763]: > Jan 31 15:00:03 crc kubenswrapper[4763]: E0131 15:00:03.710534 4763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 31 15:00:03 crc kubenswrapper[4763]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29497860-lhblx_openshift-operator-lifecycle-manager_77150388-1064-46e6-9636-ebfa1eacf88f_0(8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29497860-lhblx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede" Netns:"/var/run/netns/82607e9f-932d-4c94-8fbc-d492a7b54d60" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29497860-lhblx;K8S_POD_INFRA_CONTAINER_ID=8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede;K8S_POD_UID=77150388-1064-46e6-9636-ebfa1eacf88f" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx/77150388-1064-46e6-9636-ebfa1eacf88f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod collect-profiles-29497860-lhblx in out of cluster comm: pod "collect-profiles-29497860-lhblx" not found Jan 31 15:00:03 crc kubenswrapper[4763]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 31 15:00:03 crc kubenswrapper[4763]: > pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:03 crc kubenswrapper[4763]: E0131 15:00:03.710555 4763 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 31 15:00:03 crc kubenswrapper[4763]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29497860-lhblx_openshift-operator-lifecycle-manager_77150388-1064-46e6-9636-ebfa1eacf88f_0(8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29497860-lhblx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede" Netns:"/var/run/netns/82607e9f-932d-4c94-8fbc-d492a7b54d60" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29497860-lhblx;K8S_POD_INFRA_CONTAINER_ID=8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede;K8S_POD_UID=77150388-1064-46e6-9636-ebfa1eacf88f" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx/77150388-1064-46e6-9636-ebfa1eacf88f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod 
collect-profiles-29497860-lhblx in out of cluster comm: pod "collect-profiles-29497860-lhblx" not found Jan 31 15:00:03 crc kubenswrapper[4763]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 31 15:00:03 crc kubenswrapper[4763]: > pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:03 crc kubenswrapper[4763]: E0131 15:00:03.710615 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29497860-lhblx_openshift-operator-lifecycle-manager(77150388-1064-46e6-9636-ebfa1eacf88f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29497860-lhblx_openshift-operator-lifecycle-manager(77150388-1064-46e6-9636-ebfa1eacf88f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29497860-lhblx_openshift-operator-lifecycle-manager_77150388-1064-46e6-9636-ebfa1eacf88f_0(8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29497860-lhblx to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede\\\" Netns:\\\"/var/run/netns/82607e9f-932d-4c94-8fbc-d492a7b54d60\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29497860-lhblx;K8S_POD_INFRA_CONTAINER_ID=8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede;K8S_POD_UID=77150388-1064-46e6-9636-ebfa1eacf88f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx/77150388-1064-46e6-9636-ebfa1eacf88f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod collect-profiles-29497860-lhblx in out of cluster comm: pod \\\"collect-profiles-29497860-lhblx\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" podUID="77150388-1064-46e6-9636-ebfa1eacf88f" Jan 31 15:00:04 crc kubenswrapper[4763]: I0131 15:00:04.636219 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:04 crc kubenswrapper[4763]: I0131 15:00:04.636887 4763 util.go:30] "No sandbox for pod can be found. 
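The three multi-line records above are one and the same CNI failure surfaced at three layers (log.go RunPodSandbox, kuberuntime_sandbox.go, kuberuntime_manager.go); pod_workers.go:1301 then condenses it into a single-line "Error syncing pod, skipping" record. A minimal sketch for tallying those single-line summaries per pod, assuming only the klog/journald record format shown above; the file name kubelet.log and the regex are illustrative, not taken from the log:

    #!/usr/bin/env python3
    """Count "Error syncing pod, skipping" records per pod (sketch)."""
    import re
    from collections import Counter

    # pod="ns/name" podUID="uuid" appear adjacently at the end of the record
    POD = re.compile(r'pod="([^"]+)" podUID="([^"]+)"')

    def sync_errors(path="kubelet.log"):
        counts = Counter()
        with open(path, encoding="utf-8", errors="replace") as f:
            for line in f:
                if "Error syncing pod, skipping" not in line:
                    continue
                m = POD.search(line)
                if m:
                    counts[m.groups()] += 1
        return counts

    if __name__ == "__main__":
        for (pod, uid), n in sync_errors().most_common():
            print(f"{n:4}  {pod}  uid={uid}")

The same scan works on journalctl -u kubelet output saved to a file; here the one hit would be collect-profiles-29497860-lhblx, whose sandbox failed because the pod object was already deleted by the time multus tried to write its network-status annotation.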
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:07 crc kubenswrapper[4763]: I0131 15:00:07.252535 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx"] Jan 31 15:00:07 crc kubenswrapper[4763]: I0131 15:00:07.655498 4763 generic.go:334] "Generic (PLEG): container finished" podID="77150388-1064-46e6-9636-ebfa1eacf88f" containerID="32d38ac7a45f00c53daa0635462129846e750f14e15f4abb1ec6484cb9065078" exitCode=0 Jan 31 15:00:07 crc kubenswrapper[4763]: I0131 15:00:07.655607 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" event={"ID":"77150388-1064-46e6-9636-ebfa1eacf88f","Type":"ContainerDied","Data":"32d38ac7a45f00c53daa0635462129846e750f14e15f4abb1ec6484cb9065078"} Jan 31 15:00:07 crc kubenswrapper[4763]: I0131 15:00:07.655840 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" event={"ID":"77150388-1064-46e6-9636-ebfa1eacf88f","Type":"ContainerStarted","Data":"0aa8f4ac38d1afcc87fa2d3639b470eb20c3d54da8f24f5a5f3dca69a36f6398"} Jan 31 15:00:08 crc kubenswrapper[4763]: I0131 15:00:08.976150 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.117248 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77150388-1064-46e6-9636-ebfa1eacf88f-config-volume\") pod \"77150388-1064-46e6-9636-ebfa1eacf88f\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.117291 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77150388-1064-46e6-9636-ebfa1eacf88f-secret-volume\") pod \"77150388-1064-46e6-9636-ebfa1eacf88f\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.117336 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8jm9\" (UniqueName: \"kubernetes.io/projected/77150388-1064-46e6-9636-ebfa1eacf88f-kube-api-access-h8jm9\") pod \"77150388-1064-46e6-9636-ebfa1eacf88f\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.118197 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77150388-1064-46e6-9636-ebfa1eacf88f-config-volume" (OuterVolumeSpecName: "config-volume") pod "77150388-1064-46e6-9636-ebfa1eacf88f" (UID: "77150388-1064-46e6-9636-ebfa1eacf88f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.123748 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77150388-1064-46e6-9636-ebfa1eacf88f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "77150388-1064-46e6-9636-ebfa1eacf88f" (UID: "77150388-1064-46e6-9636-ebfa1eacf88f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.124896 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77150388-1064-46e6-9636-ebfa1eacf88f-kube-api-access-h8jm9" (OuterVolumeSpecName: "kube-api-access-h8jm9") pod "77150388-1064-46e6-9636-ebfa1eacf88f" (UID: "77150388-1064-46e6-9636-ebfa1eacf88f"). InnerVolumeSpecName "kube-api-access-h8jm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.218834 4763 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77150388-1064-46e6-9636-ebfa1eacf88f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.218873 4763 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77150388-1064-46e6-9636-ebfa1eacf88f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.218887 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8jm9\" (UniqueName: \"kubernetes.io/projected/77150388-1064-46e6-9636-ebfa1eacf88f-kube-api-access-h8jm9\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.668389 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" event={"ID":"77150388-1064-46e6-9636-ebfa1eacf88f","Type":"ContainerDied","Data":"0aa8f4ac38d1afcc87fa2d3639b470eb20c3d54da8f24f5a5f3dca69a36f6398"} Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.668680 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aa8f4ac38d1afcc87fa2d3639b470eb20c3d54da8f24f5a5f3dca69a36f6398" Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.668457 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.758687 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8bf77"] Jan 31 15:00:41 crc kubenswrapper[4763]: E0131 15:00:41.759251 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77150388-1064-46e6-9636-ebfa1eacf88f" containerName="collect-profiles" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.759263 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="77150388-1064-46e6-9636-ebfa1eacf88f" containerName="collect-profiles" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.759365 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="77150388-1064-46e6-9636-ebfa1eacf88f" containerName="collect-profiles" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.759742 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.774211 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8bf77"] Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.866290 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9b3ec224-3471-48bb-a15e-e4a6d5635279-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.866343 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b3ec224-3471-48bb-a15e-e4a6d5635279-bound-sa-token\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.866378 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b3ec224-3471-48bb-a15e-e4a6d5635279-trusted-ca\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.866403 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9b3ec224-3471-48bb-a15e-e4a6d5635279-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.866513 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.866590 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9b3ec224-3471-48bb-a15e-e4a6d5635279-registry-tls\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.866625 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9b3ec224-3471-48bb-a15e-e4a6d5635279-registry-certificates\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.866675 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2j8n\" (UniqueName: 
\"kubernetes.io/projected/9b3ec224-3471-48bb-a15e-e4a6d5635279-kube-api-access-z2j8n\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.892630 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.968246 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9b3ec224-3471-48bb-a15e-e4a6d5635279-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.968320 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b3ec224-3471-48bb-a15e-e4a6d5635279-bound-sa-token\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.968363 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b3ec224-3471-48bb-a15e-e4a6d5635279-trusted-ca\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.968387 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9b3ec224-3471-48bb-a15e-e4a6d5635279-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.968435 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9b3ec224-3471-48bb-a15e-e4a6d5635279-registry-tls\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.968457 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9b3ec224-3471-48bb-a15e-e4a6d5635279-registry-certificates\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.968484 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2j8n\" (UniqueName: \"kubernetes.io/projected/9b3ec224-3471-48bb-a15e-e4a6d5635279-kube-api-access-z2j8n\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.968978 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9b3ec224-3471-48bb-a15e-e4a6d5635279-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.970202 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b3ec224-3471-48bb-a15e-e4a6d5635279-trusted-ca\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.970227 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9b3ec224-3471-48bb-a15e-e4a6d5635279-registry-certificates\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.975024 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9b3ec224-3471-48bb-a15e-e4a6d5635279-registry-tls\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.976276 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9b3ec224-3471-48bb-a15e-e4a6d5635279-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.987178 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b3ec224-3471-48bb-a15e-e4a6d5635279-bound-sa-token\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.993227 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2j8n\" (UniqueName: \"kubernetes.io/projected/9b3ec224-3471-48bb-a15e-e4a6d5635279-kube-api-access-z2j8n\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:42 crc kubenswrapper[4763]: I0131 15:00:42.081018 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:42 crc kubenswrapper[4763]: I0131 15:00:42.543929 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8bf77"] Jan 31 15:00:42 crc kubenswrapper[4763]: I0131 15:00:42.861659 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" event={"ID":"9b3ec224-3471-48bb-a15e-e4a6d5635279","Type":"ContainerStarted","Data":"b7413ced75d10a5e383b9f6e94e8568213cb32cdf9bf5ce5a3ef72224d47b796"} Jan 31 15:00:42 crc kubenswrapper[4763]: I0131 15:00:42.862022 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" event={"ID":"9b3ec224-3471-48bb-a15e-e4a6d5635279","Type":"ContainerStarted","Data":"0f9ffcfe233eade7219ea2e3b8ef877b660d7505c1f1bdcf35ad12638cd4e3e4"} Jan 31 15:00:42 crc kubenswrapper[4763]: I0131 15:00:42.862055 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:42 crc kubenswrapper[4763]: I0131 15:00:42.882920 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" podStartSLOduration=1.882896836 podStartE2EDuration="1.882896836s" podCreationTimestamp="2026-01-31 15:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:42.881483107 +0000 UTC m=+362.636221400" watchObservedRunningTime="2026-01-31 15:00:42.882896836 +0000 UTC m=+362.637635179" Jan 31 15:00:44 crc kubenswrapper[4763]: I0131 15:00:44.177399 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:00:44 crc kubenswrapper[4763]: I0131 15:00:44.177729 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.842327 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k6ddv"] Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.843090 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k6ddv" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerName="registry-server" containerID="cri-o://b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed" gracePeriod=30 Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.860194 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9df4p"] Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.860475 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9df4p" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerName="registry-server" containerID="cri-o://34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5" gracePeriod=30 
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.842327 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k6ddv"]
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.843090 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k6ddv" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerName="registry-server" containerID="cri-o://b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed" gracePeriod=30
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.860194 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9df4p"]
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.860475 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9df4p" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerName="registry-server" containerID="cri-o://34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5" gracePeriod=30
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.877483 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flcgf"]
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.877954 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" containerID="cri-o://94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445" gracePeriod=30
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.885597 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnmmq"]
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.886145 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wnmmq" podUID="2434f0b9-846a-444c-b487-745d4010002b" containerName="registry-server" containerID="cri-o://0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97" gracePeriod=30
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.901222 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wxzg8"]
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.901656 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wxzg8" podUID="5f3cc890-2041-4983-8501-088c40c22b77" containerName="registry-server" containerID="cri-o://6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1" gracePeriod=30
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.916754 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gg2dq"]
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.918107 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.928576 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gg2dq"]
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.987087 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/38baa8fd-7b8e-4c7b-ac03-d739f10d242a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gg2dq\" (UID: \"38baa8fd-7b8e-4c7b-ac03-d739f10d242a\") " pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.987484 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38baa8fd-7b8e-4c7b-ac03-d739f10d242a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gg2dq\" (UID: \"38baa8fd-7b8e-4c7b-ac03-d739f10d242a\") " pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.987522 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp6df\" (UniqueName: \"kubernetes.io/projected/38baa8fd-7b8e-4c7b-ac03-d739f10d242a-kube-api-access-vp6df\") pod \"marketplace-operator-79b997595-gg2dq\" (UID: \"38baa8fd-7b8e-4c7b-ac03-d739f10d242a\") " pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.090525 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38baa8fd-7b8e-4c7b-ac03-d739f10d242a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gg2dq\" (UID: \"38baa8fd-7b8e-4c7b-ac03-d739f10d242a\") " pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.090624 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp6df\" (UniqueName: \"kubernetes.io/projected/38baa8fd-7b8e-4c7b-ac03-d739f10d242a-kube-api-access-vp6df\") pod \"marketplace-operator-79b997595-gg2dq\" (UID: \"38baa8fd-7b8e-4c7b-ac03-d739f10d242a\") " pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.090832 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/38baa8fd-7b8e-4c7b-ac03-d739f10d242a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gg2dq\" (UID: \"38baa8fd-7b8e-4c7b-ac03-d739f10d242a\") " pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.091928 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38baa8fd-7b8e-4c7b-ac03-d739f10d242a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gg2dq\" (UID: \"38baa8fd-7b8e-4c7b-ac03-d739f10d242a\") " pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.099087 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/38baa8fd-7b8e-4c7b-ac03-d739f10d242a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gg2dq\" (UID: \"38baa8fd-7b8e-4c7b-ac03-d739f10d242a\") " pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.108339 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp6df\" (UniqueName: \"kubernetes.io/projected/38baa8fd-7b8e-4c7b-ac03-d739f10d242a-kube-api-access-vp6df\") pod \"marketplace-operator-79b997595-gg2dq\" (UID: \"38baa8fd-7b8e-4c7b-ac03-d739f10d242a\") " pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.313569 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.332195 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k6ddv"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.348374 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wxzg8"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.349094 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnmmq"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.349445 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9df4p"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.355286 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.400363 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk9ln\" (UniqueName: \"kubernetes.io/projected/b8a35a73-67a0-4bb4-9954-46350d31b017-kube-api-access-vk9ln\") pod \"b8a35a73-67a0-4bb4-9954-46350d31b017\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.400423 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-utilities\") pod \"b8a35a73-67a0-4bb4-9954-46350d31b017\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.400486 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-catalog-content\") pod \"b8a35a73-67a0-4bb4-9954-46350d31b017\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.402317 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-utilities" (OuterVolumeSpecName: "utilities") pod "b8a35a73-67a0-4bb4-9954-46350d31b017" (UID: "b8a35a73-67a0-4bb4-9954-46350d31b017"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.404400 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8a35a73-67a0-4bb4-9954-46350d31b017-kube-api-access-vk9ln" (OuterVolumeSpecName: "kube-api-access-vk9ln") pod "b8a35a73-67a0-4bb4-9954-46350d31b017" (UID: "b8a35a73-67a0-4bb4-9954-46350d31b017"). InnerVolumeSpecName "kube-api-access-vk9ln". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.488619 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8a35a73-67a0-4bb4-9954-46350d31b017" (UID: "b8a35a73-67a0-4bb4-9954-46350d31b017"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.504903 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-utilities\") pod \"5f3cc890-2041-4983-8501-088c40c22b77\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505001 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-utilities\") pod \"2434f0b9-846a-444c-b487-745d4010002b\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505074 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bxzh\" (UniqueName: \"kubernetes.io/projected/5f3cc890-2041-4983-8501-088c40c22b77-kube-api-access-7bxzh\") pod \"5f3cc890-2041-4983-8501-088c40c22b77\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505106 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-catalog-content\") pod \"2434f0b9-846a-444c-b487-745d4010002b\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505171 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfdsx\" (UniqueName: \"kubernetes.io/projected/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-kube-api-access-sfdsx\") pod \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505197 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-operator-metrics\") pod \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505262 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzr25\" (UniqueName: \"kubernetes.io/projected/2434f0b9-846a-444c-b487-745d4010002b-kube-api-access-tzr25\") pod \"2434f0b9-846a-444c-b487-745d4010002b\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505322 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrvp2\" (UniqueName: \"kubernetes.io/projected/5c097873-7ca4-491d-86c4-31b2ab99d63d-kube-api-access-rrvp2\") pod \"5c097873-7ca4-491d-86c4-31b2ab99d63d\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505355 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-catalog-content\") pod \"5f3cc890-2041-4983-8501-088c40c22b77\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505427 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-trusted-ca\") pod \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505547 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-utilities\") pod \"5c097873-7ca4-491d-86c4-31b2ab99d63d\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505600 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-catalog-content\") pod \"5c097873-7ca4-491d-86c4-31b2ab99d63d\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.506002 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk9ln\" (UniqueName: \"kubernetes.io/projected/b8a35a73-67a0-4bb4-9954-46350d31b017-kube-api-access-vk9ln\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.506022 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.506034 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.506125 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-utilities" (OuterVolumeSpecName: "utilities") pod "5f3cc890-2041-4983-8501-088c40c22b77" (UID: "5f3cc890-2041-4983-8501-088c40c22b77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.506271 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-utilities" (OuterVolumeSpecName: "utilities") pod "2434f0b9-846a-444c-b487-745d4010002b" (UID: "2434f0b9-846a-444c-b487-745d4010002b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.508305 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" (UID: "c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.510169 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-utilities" (OuterVolumeSpecName: "utilities") pod "5c097873-7ca4-491d-86c4-31b2ab99d63d" (UID: "5c097873-7ca4-491d-86c4-31b2ab99d63d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.511530 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f3cc890-2041-4983-8501-088c40c22b77-kube-api-access-7bxzh" (OuterVolumeSpecName: "kube-api-access-7bxzh") pod "5f3cc890-2041-4983-8501-088c40c22b77" (UID: "5f3cc890-2041-4983-8501-088c40c22b77"). InnerVolumeSpecName "kube-api-access-7bxzh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.514478 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c097873-7ca4-491d-86c4-31b2ab99d63d-kube-api-access-rrvp2" (OuterVolumeSpecName: "kube-api-access-rrvp2") pod "5c097873-7ca4-491d-86c4-31b2ab99d63d" (UID: "5c097873-7ca4-491d-86c4-31b2ab99d63d"). InnerVolumeSpecName "kube-api-access-rrvp2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.516209 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" (UID: "c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.521815 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-kube-api-access-sfdsx" (OuterVolumeSpecName: "kube-api-access-sfdsx") pod "c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" (UID: "c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc"). InnerVolumeSpecName "kube-api-access-sfdsx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.523917 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2434f0b9-846a-444c-b487-745d4010002b-kube-api-access-tzr25" (OuterVolumeSpecName: "kube-api-access-tzr25") pod "2434f0b9-846a-444c-b487-745d4010002b" (UID: "2434f0b9-846a-444c-b487-745d4010002b"). InnerVolumeSpecName "kube-api-access-tzr25". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.543595 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2434f0b9-846a-444c-b487-745d4010002b" (UID: "2434f0b9-846a-444c-b487-745d4010002b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.583508 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c097873-7ca4-491d-86c4-31b2ab99d63d" (UID: "5c097873-7ca4-491d-86c4-31b2ab99d63d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608240 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608319 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608352 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608382 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bxzh\" (UniqueName: \"kubernetes.io/projected/5f3cc890-2041-4983-8501-088c40c22b77-kube-api-access-7bxzh\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608405 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608435 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfdsx\" (UniqueName: \"kubernetes.io/projected/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-kube-api-access-sfdsx\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608450 4763 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608466 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzr25\" (UniqueName: \"kubernetes.io/projected/2434f0b9-846a-444c-b487-745d4010002b-kube-api-access-tzr25\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608483 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrvp2\" (UniqueName: \"kubernetes.io/projected/5c097873-7ca4-491d-86c4-31b2ab99d63d-kube-api-access-rrvp2\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608514 4763 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608527 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.638006 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f3cc890-2041-4983-8501-088c40c22b77" (UID: "5f3cc890-2041-4983-8501-088c40c22b77"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.709472 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.762674 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gg2dq"]
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.925356 4763 generic.go:334] "Generic (PLEG): container finished" podID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerID="94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445" exitCode=0
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.925463 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" event={"ID":"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc","Type":"ContainerDied","Data":"94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445"}
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.925504 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" event={"ID":"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc","Type":"ContainerDied","Data":"ed7d3da6199e8bb4c55e177b1afca8ac78c017a1ea997eff233008f48616b7c8"}
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.925527 4763 scope.go:117] "RemoveContainer" containerID="94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.925714 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.933292 4763 generic.go:334] "Generic (PLEG): container finished" podID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerID="34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5" exitCode=0
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.933342 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9df4p"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.933361 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9df4p" event={"ID":"5c097873-7ca4-491d-86c4-31b2ab99d63d","Type":"ContainerDied","Data":"34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5"}
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.933404 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9df4p" event={"ID":"5c097873-7ca4-491d-86c4-31b2ab99d63d","Type":"ContainerDied","Data":"1eaa5c467faffeb2ba7ad8dc241225ca0c8240c2cf3a8e19cde7c5ee1bfecc47"}
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.937366 4763 generic.go:334] "Generic (PLEG): container finished" podID="5f3cc890-2041-4983-8501-088c40c22b77" containerID="6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1" exitCode=0
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.937464 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxzg8" event={"ID":"5f3cc890-2041-4983-8501-088c40c22b77","Type":"ContainerDied","Data":"6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1"}
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.937504 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxzg8" event={"ID":"5f3cc890-2041-4983-8501-088c40c22b77","Type":"ContainerDied","Data":"2abcdafc7fd1d0b73e0182854de8d66e2f3700062dfd88e6ccab246a85b4c70b"}
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.937622 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wxzg8"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.947245 4763 scope.go:117] "RemoveContainer" containerID="f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.951899 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq" event={"ID":"38baa8fd-7b8e-4c7b-ac03-d739f10d242a","Type":"ContainerStarted","Data":"31865f0c5dac807dfa0bc8cda96625a8500065ea786aab87d44f2a49709e828d"}
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.951941 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq" event={"ID":"38baa8fd-7b8e-4c7b-ac03-d739f10d242a","Type":"ContainerStarted","Data":"4449ccdd6773450d3c9ebbd3372c0b744e31f6b2aefbcef2cee13410cfd7c936"}
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.967496 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9df4p"]
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.971380 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9df4p"]
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.973399 4763 generic.go:334] "Generic (PLEG): container finished" podID="2434f0b9-846a-444c-b487-745d4010002b" containerID="0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97" exitCode=0
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.973459 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnmmq"
event={"ID":"2434f0b9-846a-444c-b487-745d4010002b","Type":"ContainerDied","Data":"0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97"} Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.973484 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnmmq" event={"ID":"2434f0b9-846a-444c-b487-745d4010002b","Type":"ContainerDied","Data":"3d06c6fc284feb92aefe314de6491da4d8ed4752eabe5f4dc5c6709aa6023802"} Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.973557 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.984402 4763 generic.go:334] "Generic (PLEG): container finished" podID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerID="b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed" exitCode=0 Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.984458 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6ddv" event={"ID":"b8a35a73-67a0-4bb4-9954-46350d31b017","Type":"ContainerDied","Data":"b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed"} Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.984486 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6ddv" event={"ID":"b8a35a73-67a0-4bb4-9954-46350d31b017","Type":"ContainerDied","Data":"6f670464716ecf8ab5d99a2382a3bcaf7162a13bd03fa816cb2c7b4734ade299"} Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.984572 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.000149 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq" podStartSLOduration=2.000126433 podStartE2EDuration="2.000126433s" podCreationTimestamp="2026-01-31 15:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:50.986976229 +0000 UTC m=+370.741714522" watchObservedRunningTime="2026-01-31 15:00:51.000126433 +0000 UTC m=+370.754864726" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.009994 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flcgf"] Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.024677 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flcgf"] Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.028443 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wxzg8"] Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.034763 4763 scope.go:117] "RemoveContainer" containerID="94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.035010 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wxzg8"] Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.037167 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445\": container with ID starting with 
94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445 not found: ID does not exist" containerID="94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.037205 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445"} err="failed to get container status \"94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445\": rpc error: code = NotFound desc = could not find container \"94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445\": container with ID starting with 94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445 not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.037232 4763 scope.go:117] "RemoveContainer" containerID="f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.037571 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca\": container with ID starting with f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca not found: ID does not exist" containerID="f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.037604 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca"} err="failed to get container status \"f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca\": rpc error: code = NotFound desc = could not find container \"f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca\": container with ID starting with f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.037634 4763 scope.go:117] "RemoveContainer" containerID="34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.054217 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" path="/var/lib/kubelet/pods/5c097873-7ca4-491d-86c4-31b2ab99d63d/volumes" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.055241 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f3cc890-2041-4983-8501-088c40c22b77" path="/var/lib/kubelet/pods/5f3cc890-2041-4983-8501-088c40c22b77/volumes" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.056891 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" path="/var/lib/kubelet/pods/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc/volumes" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.060984 4763 scope.go:117] "RemoveContainer" containerID="8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.068406 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k6ddv"] Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.080874 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k6ddv"] Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.082077 4763 scope.go:117] 
"RemoveContainer" containerID="8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.085980 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnmmq"] Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.089730 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnmmq"] Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.097168 4763 scope.go:117] "RemoveContainer" containerID="34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.097687 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5\": container with ID starting with 34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5 not found: ID does not exist" containerID="34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.097739 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5"} err="failed to get container status \"34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5\": rpc error: code = NotFound desc = could not find container \"34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5\": container with ID starting with 34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5 not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.097768 4763 scope.go:117] "RemoveContainer" containerID="8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.098115 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b\": container with ID starting with 8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b not found: ID does not exist" containerID="8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.098194 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b"} err="failed to get container status \"8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b\": rpc error: code = NotFound desc = could not find container \"8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b\": container with ID starting with 8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.098236 4763 scope.go:117] "RemoveContainer" containerID="8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.098571 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc\": container with ID starting with 8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc not found: ID does not exist" 
containerID="8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.098602 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc"} err="failed to get container status \"8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc\": rpc error: code = NotFound desc = could not find container \"8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc\": container with ID starting with 8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.098617 4763 scope.go:117] "RemoveContainer" containerID="6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.113997 4763 scope.go:117] "RemoveContainer" containerID="92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.139196 4763 scope.go:117] "RemoveContainer" containerID="427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.156533 4763 scope.go:117] "RemoveContainer" containerID="6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.157023 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1\": container with ID starting with 6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1 not found: ID does not exist" containerID="6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.157085 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1"} err="failed to get container status \"6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1\": rpc error: code = NotFound desc = could not find container \"6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1\": container with ID starting with 6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1 not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.157127 4763 scope.go:117] "RemoveContainer" containerID="92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.157728 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807\": container with ID starting with 92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807 not found: ID does not exist" containerID="92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.157786 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807"} err="failed to get container status \"92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807\": rpc error: code = NotFound desc = could not find container 
\"92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807\": container with ID starting with 92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807 not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.157816 4763 scope.go:117] "RemoveContainer" containerID="427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.158128 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18\": container with ID starting with 427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18 not found: ID does not exist" containerID="427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.158149 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18"} err="failed to get container status \"427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18\": rpc error: code = NotFound desc = could not find container \"427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18\": container with ID starting with 427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18 not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.158162 4763 scope.go:117] "RemoveContainer" containerID="0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.175593 4763 scope.go:117] "RemoveContainer" containerID="f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.190458 4763 scope.go:117] "RemoveContainer" containerID="05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.205628 4763 scope.go:117] "RemoveContainer" containerID="0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.206527 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97\": container with ID starting with 0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97 not found: ID does not exist" containerID="0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.206566 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97"} err="failed to get container status \"0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97\": rpc error: code = NotFound desc = could not find container \"0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97\": container with ID starting with 0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97 not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.206605 4763 scope.go:117] "RemoveContainer" containerID="f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.206982 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1\": container with ID starting with f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1 not found: ID does not exist" containerID="f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.207056 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1"} err="failed to get container status \"f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1\": rpc error: code = NotFound desc = could not find container \"f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1\": container with ID starting with f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1 not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.207102 4763 scope.go:117] "RemoveContainer" containerID="05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.207470 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579\": container with ID starting with 05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579 not found: ID does not exist" containerID="05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.207505 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579"} err="failed to get container status \"05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579\": rpc error: code = NotFound desc = could not find container \"05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579\": container with ID starting with 05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579 not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.207531 4763 scope.go:117] "RemoveContainer" containerID="b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.239846 4763 scope.go:117] "RemoveContainer" containerID="b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.263160 4763 scope.go:117] "RemoveContainer" containerID="f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.291771 4763 scope.go:117] "RemoveContainer" containerID="b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.292091 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed\": container with ID starting with b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed not found: ID does not exist" containerID="b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.292145 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed"} err="failed 
to get container status \"b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed\": rpc error: code = NotFound desc = could not find container \"b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed\": container with ID starting with b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.292183 4763 scope.go:117] "RemoveContainer" containerID="b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.292533 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239\": container with ID starting with b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239 not found: ID does not exist" containerID="b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.292594 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239"} err="failed to get container status \"b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239\": rpc error: code = NotFound desc = could not find container \"b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239\": container with ID starting with b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239 not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.292634 4763 scope.go:117] "RemoveContainer" containerID="f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.292909 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d\": container with ID starting with f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d not found: ID does not exist" containerID="f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.292940 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d"} err="failed to get container status \"f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d\": rpc error: code = NotFound desc = could not find container \"f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d\": container with ID starting with f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d not found: ID does not exist" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.016208 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.020402 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.068721 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-znznv"] Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.068983 4763 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerName="extract-utilities" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.068999 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerName="extract-utilities" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069018 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2434f0b9-846a-444c-b487-745d4010002b" containerName="extract-utilities" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069025 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="2434f0b9-846a-444c-b487-745d4010002b" containerName="extract-utilities" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069036 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerName="extract-utilities" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069044 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerName="extract-utilities" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069056 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3cc890-2041-4983-8501-088c40c22b77" containerName="extract-content" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069064 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3cc890-2041-4983-8501-088c40c22b77" containerName="extract-content" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069077 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerName="extract-content" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069084 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerName="extract-content" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069095 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069102 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069113 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069121 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069130 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069138 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069145 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2434f0b9-846a-444c-b487-745d4010002b" containerName="extract-content" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069153 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="2434f0b9-846a-444c-b487-745d4010002b" containerName="extract-content" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069163 4763 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069170 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069180 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerName="extract-content" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069187 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerName="extract-content" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069199 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3cc890-2041-4983-8501-088c40c22b77" containerName="extract-utilities" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069206 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3cc890-2041-4983-8501-088c40c22b77" containerName="extract-utilities" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069217 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3cc890-2041-4983-8501-088c40c22b77" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069224 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3cc890-2041-4983-8501-088c40c22b77" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069233 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2434f0b9-846a-444c-b487-745d4010002b" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069241 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="2434f0b9-846a-444c-b487-745d4010002b" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069363 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="2434f0b9-846a-444c-b487-745d4010002b" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069455 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f3cc890-2041-4983-8501-088c40c22b77" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069480 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069493 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069508 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069520 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.070389 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.074284 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.086127 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-znznv"] Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.231046 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48hzt\" (UniqueName: \"kubernetes.io/projected/d877abcd-9d8f-4597-b41c-4026d954cc62-kube-api-access-48hzt\") pod \"redhat-marketplace-znznv\" (UID: \"d877abcd-9d8f-4597-b41c-4026d954cc62\") " pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.231186 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d877abcd-9d8f-4597-b41c-4026d954cc62-utilities\") pod \"redhat-marketplace-znznv\" (UID: \"d877abcd-9d8f-4597-b41c-4026d954cc62\") " pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.231242 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d877abcd-9d8f-4597-b41c-4026d954cc62-catalog-content\") pod \"redhat-marketplace-znznv\" (UID: \"d877abcd-9d8f-4597-b41c-4026d954cc62\") " pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.259798 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gshr8"] Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.261165 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.263817 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.292343 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gshr8"] Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.332520 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/751420d5-1809-406a-bef8-8e4015d9763b-utilities\") pod \"redhat-operators-gshr8\" (UID: \"751420d5-1809-406a-bef8-8e4015d9763b\") " pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.332915 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48hzt\" (UniqueName: \"kubernetes.io/projected/d877abcd-9d8f-4597-b41c-4026d954cc62-kube-api-access-48hzt\") pod \"redhat-marketplace-znznv\" (UID: \"d877abcd-9d8f-4597-b41c-4026d954cc62\") " pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.333038 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/751420d5-1809-406a-bef8-8e4015d9763b-catalog-content\") pod \"redhat-operators-gshr8\" (UID: \"751420d5-1809-406a-bef8-8e4015d9763b\") " pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.333136 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d877abcd-9d8f-4597-b41c-4026d954cc62-utilities\") pod \"redhat-marketplace-znznv\" (UID: \"d877abcd-9d8f-4597-b41c-4026d954cc62\") " pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.333238 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvtnf\" (UniqueName: \"kubernetes.io/projected/751420d5-1809-406a-bef8-8e4015d9763b-kube-api-access-qvtnf\") pod \"redhat-operators-gshr8\" (UID: \"751420d5-1809-406a-bef8-8e4015d9763b\") " pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.333347 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d877abcd-9d8f-4597-b41c-4026d954cc62-catalog-content\") pod \"redhat-marketplace-znznv\" (UID: \"d877abcd-9d8f-4597-b41c-4026d954cc62\") " pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.333621 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d877abcd-9d8f-4597-b41c-4026d954cc62-utilities\") pod \"redhat-marketplace-znznv\" (UID: \"d877abcd-9d8f-4597-b41c-4026d954cc62\") " pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.333657 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d877abcd-9d8f-4597-b41c-4026d954cc62-catalog-content\") pod \"redhat-marketplace-znznv\" (UID: 
\"d877abcd-9d8f-4597-b41c-4026d954cc62\") " pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.358375 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48hzt\" (UniqueName: \"kubernetes.io/projected/d877abcd-9d8f-4597-b41c-4026d954cc62-kube-api-access-48hzt\") pod \"redhat-marketplace-znznv\" (UID: \"d877abcd-9d8f-4597-b41c-4026d954cc62\") " pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.416527 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.434343 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvtnf\" (UniqueName: \"kubernetes.io/projected/751420d5-1809-406a-bef8-8e4015d9763b-kube-api-access-qvtnf\") pod \"redhat-operators-gshr8\" (UID: \"751420d5-1809-406a-bef8-8e4015d9763b\") " pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.434515 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/751420d5-1809-406a-bef8-8e4015d9763b-utilities\") pod \"redhat-operators-gshr8\" (UID: \"751420d5-1809-406a-bef8-8e4015d9763b\") " pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.434634 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/751420d5-1809-406a-bef8-8e4015d9763b-catalog-content\") pod \"redhat-operators-gshr8\" (UID: \"751420d5-1809-406a-bef8-8e4015d9763b\") " pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.435099 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/751420d5-1809-406a-bef8-8e4015d9763b-utilities\") pod \"redhat-operators-gshr8\" (UID: \"751420d5-1809-406a-bef8-8e4015d9763b\") " pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.435226 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/751420d5-1809-406a-bef8-8e4015d9763b-catalog-content\") pod \"redhat-operators-gshr8\" (UID: \"751420d5-1809-406a-bef8-8e4015d9763b\") " pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.458146 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvtnf\" (UniqueName: \"kubernetes.io/projected/751420d5-1809-406a-bef8-8e4015d9763b-kube-api-access-qvtnf\") pod \"redhat-operators-gshr8\" (UID: \"751420d5-1809-406a-bef8-8e4015d9763b\") " pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.580129 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.650186 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-znznv"] Jan 31 15:00:52 crc kubenswrapper[4763]: W0131 15:00:52.658481 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd877abcd_9d8f_4597_b41c_4026d954cc62.slice/crio-3b2a5091f963da8fc1e5440cf7010aa5220b66eef122ef1654b140b520ddeff1 WatchSource:0}: Error finding container 3b2a5091f963da8fc1e5440cf7010aa5220b66eef122ef1654b140b520ddeff1: Status 404 returned error can't find the container with id 3b2a5091f963da8fc1e5440cf7010aa5220b66eef122ef1654b140b520ddeff1 Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.781486 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gshr8"] Jan 31 15:00:52 crc kubenswrapper[4763]: W0131 15:00:52.800926 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod751420d5_1809_406a_bef8_8e4015d9763b.slice/crio-7cb99f8b81c61afa0cf480506b6a47630b9cfbdd85e3616597b786216fca87e1 WatchSource:0}: Error finding container 7cb99f8b81c61afa0cf480506b6a47630b9cfbdd85e3616597b786216fca87e1: Status 404 returned error can't find the container with id 7cb99f8b81c61afa0cf480506b6a47630b9cfbdd85e3616597b786216fca87e1 Jan 31 15:00:53 crc kubenswrapper[4763]: I0131 15:00:53.022105 4763 generic.go:334] "Generic (PLEG): container finished" podID="751420d5-1809-406a-bef8-8e4015d9763b" containerID="ddcaf533e632566e5c4269beae815a92218a430b3b9db331c48179a6b13b0cd9" exitCode=0 Jan 31 15:00:53 crc kubenswrapper[4763]: I0131 15:00:53.022357 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gshr8" event={"ID":"751420d5-1809-406a-bef8-8e4015d9763b","Type":"ContainerDied","Data":"ddcaf533e632566e5c4269beae815a92218a430b3b9db331c48179a6b13b0cd9"} Jan 31 15:00:53 crc kubenswrapper[4763]: I0131 15:00:53.023922 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gshr8" event={"ID":"751420d5-1809-406a-bef8-8e4015d9763b","Type":"ContainerStarted","Data":"7cb99f8b81c61afa0cf480506b6a47630b9cfbdd85e3616597b786216fca87e1"} Jan 31 15:00:53 crc kubenswrapper[4763]: I0131 15:00:53.025993 4763 generic.go:334] "Generic (PLEG): container finished" podID="d877abcd-9d8f-4597-b41c-4026d954cc62" containerID="c117fec153a7942bbf26ced795ff3a5bad0a2f3b3312d724c2164f2f2a60e711" exitCode=0 Jan 31 15:00:53 crc kubenswrapper[4763]: I0131 15:00:53.026208 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znznv" event={"ID":"d877abcd-9d8f-4597-b41c-4026d954cc62","Type":"ContainerDied","Data":"c117fec153a7942bbf26ced795ff3a5bad0a2f3b3312d724c2164f2f2a60e711"} Jan 31 15:00:53 crc kubenswrapper[4763]: I0131 15:00:53.026859 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znznv" event={"ID":"d877abcd-9d8f-4597-b41c-4026d954cc62","Type":"ContainerStarted","Data":"3b2a5091f963da8fc1e5440cf7010aa5220b66eef122ef1654b140b520ddeff1"} Jan 31 15:00:53 crc kubenswrapper[4763]: I0131 15:00:53.054515 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2434f0b9-846a-444c-b487-745d4010002b" path="/var/lib/kubelet/pods/2434f0b9-846a-444c-b487-745d4010002b/volumes" Jan 31 
15:00:53 crc kubenswrapper[4763]: I0131 15:00:53.056069 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" path="/var/lib/kubelet/pods/b8a35a73-67a0-4bb4-9954-46350d31b017/volumes" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.035145 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gshr8" event={"ID":"751420d5-1809-406a-bef8-8e4015d9763b","Type":"ContainerStarted","Data":"bf763c3e12e9511d327e71d4e25b967bef5da397d1f97369da781f44007446f6"} Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.038198 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znznv" event={"ID":"d877abcd-9d8f-4597-b41c-4026d954cc62","Type":"ContainerStarted","Data":"1553f6bb3bbf0575af47634359e53323fc49b2bbe8d6197bb975720bff6376b1"} Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.462242 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-frpn9"] Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.464130 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.466448 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.479449 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-frpn9"] Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.565477 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz7h8\" (UniqueName: \"kubernetes.io/projected/5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3-kube-api-access-kz7h8\") pod \"certified-operators-frpn9\" (UID: \"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3\") " pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.565574 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3-utilities\") pod \"certified-operators-frpn9\" (UID: \"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3\") " pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.565789 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3-catalog-content\") pod \"certified-operators-frpn9\" (UID: \"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3\") " pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.660432 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v59tf"] Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.661579 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.663721 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.666612 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3-utilities\") pod \"certified-operators-frpn9\" (UID: \"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3\") " pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.666768 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3-catalog-content\") pod \"certified-operators-frpn9\" (UID: \"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3\") " pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.666804 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz7h8\" (UniqueName: \"kubernetes.io/projected/5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3-kube-api-access-kz7h8\") pod \"certified-operators-frpn9\" (UID: \"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3\") " pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.667350 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3-catalog-content\") pod \"certified-operators-frpn9\" (UID: \"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3\") " pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.667516 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3-utilities\") pod \"certified-operators-frpn9\" (UID: \"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3\") " pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.670191 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v59tf"] Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.733501 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz7h8\" (UniqueName: \"kubernetes.io/projected/5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3-kube-api-access-kz7h8\") pod \"certified-operators-frpn9\" (UID: \"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3\") " pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.768573 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6pkz\" (UniqueName: \"kubernetes.io/projected/e2f6ea13-f993-4138-b5d5-a549e9aae21b-kube-api-access-q6pkz\") pod \"community-operators-v59tf\" (UID: \"e2f6ea13-f993-4138-b5d5-a549e9aae21b\") " pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.768633 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f6ea13-f993-4138-b5d5-a549e9aae21b-catalog-content\") pod \"community-operators-v59tf\" (UID: 
\"e2f6ea13-f993-4138-b5d5-a549e9aae21b\") " pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.768668 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f6ea13-f993-4138-b5d5-a549e9aae21b-utilities\") pod \"community-operators-v59tf\" (UID: \"e2f6ea13-f993-4138-b5d5-a549e9aae21b\") " pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.790594 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.870222 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6pkz\" (UniqueName: \"kubernetes.io/projected/e2f6ea13-f993-4138-b5d5-a549e9aae21b-kube-api-access-q6pkz\") pod \"community-operators-v59tf\" (UID: \"e2f6ea13-f993-4138-b5d5-a549e9aae21b\") " pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.870712 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f6ea13-f993-4138-b5d5-a549e9aae21b-catalog-content\") pod \"community-operators-v59tf\" (UID: \"e2f6ea13-f993-4138-b5d5-a549e9aae21b\") " pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.870737 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f6ea13-f993-4138-b5d5-a549e9aae21b-utilities\") pod \"community-operators-v59tf\" (UID: \"e2f6ea13-f993-4138-b5d5-a549e9aae21b\") " pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.872089 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f6ea13-f993-4138-b5d5-a549e9aae21b-catalog-content\") pod \"community-operators-v59tf\" (UID: \"e2f6ea13-f993-4138-b5d5-a549e9aae21b\") " pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.872397 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f6ea13-f993-4138-b5d5-a549e9aae21b-utilities\") pod \"community-operators-v59tf\" (UID: \"e2f6ea13-f993-4138-b5d5-a549e9aae21b\") " pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.895626 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6pkz\" (UniqueName: \"kubernetes.io/projected/e2f6ea13-f993-4138-b5d5-a549e9aae21b-kube-api-access-q6pkz\") pod \"community-operators-v59tf\" (UID: \"e2f6ea13-f993-4138-b5d5-a549e9aae21b\") " pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:55 crc kubenswrapper[4763]: I0131 15:00:55.003813 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-frpn9"] Jan 31 15:00:55 crc kubenswrapper[4763]: I0131 15:00:55.049064 4763 generic.go:334] "Generic (PLEG): container finished" podID="751420d5-1809-406a-bef8-8e4015d9763b" containerID="bf763c3e12e9511d327e71d4e25b967bef5da397d1f97369da781f44007446f6" exitCode=0 Jan 31 15:00:55 crc kubenswrapper[4763]: I0131 15:00:55.056551 4763 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gshr8" event={"ID":"751420d5-1809-406a-bef8-8e4015d9763b","Type":"ContainerDied","Data":"bf763c3e12e9511d327e71d4e25b967bef5da397d1f97369da781f44007446f6"} Jan 31 15:00:55 crc kubenswrapper[4763]: I0131 15:00:55.060382 4763 generic.go:334] "Generic (PLEG): container finished" podID="d877abcd-9d8f-4597-b41c-4026d954cc62" containerID="1553f6bb3bbf0575af47634359e53323fc49b2bbe8d6197bb975720bff6376b1" exitCode=0 Jan 31 15:00:55 crc kubenswrapper[4763]: I0131 15:00:55.060462 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znznv" event={"ID":"d877abcd-9d8f-4597-b41c-4026d954cc62","Type":"ContainerDied","Data":"1553f6bb3bbf0575af47634359e53323fc49b2bbe8d6197bb975720bff6376b1"} Jan 31 15:00:55 crc kubenswrapper[4763]: I0131 15:00:55.063839 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frpn9" event={"ID":"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3","Type":"ContainerStarted","Data":"692f822dcf6f33c7e4f1099ee2bd24b697bb0e3c08db689dabfdb8feb4b46a43"} Jan 31 15:00:55 crc kubenswrapper[4763]: I0131 15:00:55.079178 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:55 crc kubenswrapper[4763]: I0131 15:00:55.240267 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v59tf"] Jan 31 15:00:55 crc kubenswrapper[4763]: W0131 15:00:55.243435 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2f6ea13_f993_4138_b5d5_a549e9aae21b.slice/crio-e68a120ea9937e6c9fc25705105b6768a0da7188b3c8199ec831cf1dfc15e225 WatchSource:0}: Error finding container e68a120ea9937e6c9fc25705105b6768a0da7188b3c8199ec831cf1dfc15e225: Status 404 returned error can't find the container with id e68a120ea9937e6c9fc25705105b6768a0da7188b3c8199ec831cf1dfc15e225 Jan 31 15:00:56 crc kubenswrapper[4763]: I0131 15:00:56.071624 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gshr8" event={"ID":"751420d5-1809-406a-bef8-8e4015d9763b","Type":"ContainerStarted","Data":"948be45851843087686767c5b56e2655ccdf6c3a5bdb938c8d82cc9d38346b16"} Jan 31 15:00:56 crc kubenswrapper[4763]: I0131 15:00:56.073843 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znznv" event={"ID":"d877abcd-9d8f-4597-b41c-4026d954cc62","Type":"ContainerStarted","Data":"8330978aa0524b41cae166dfabefdd6921a78a08c86af0bcad387d3f5b44e71b"} Jan 31 15:00:56 crc kubenswrapper[4763]: I0131 15:00:56.075637 4763 generic.go:334] "Generic (PLEG): container finished" podID="5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3" containerID="c4a0f5cb76157a8898af1357d6f56134d4c3e3555d17bcd012e8ca76746b4a8f" exitCode=0 Jan 31 15:00:56 crc kubenswrapper[4763]: I0131 15:00:56.075723 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frpn9" event={"ID":"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3","Type":"ContainerDied","Data":"c4a0f5cb76157a8898af1357d6f56134d4c3e3555d17bcd012e8ca76746b4a8f"} Jan 31 15:00:56 crc kubenswrapper[4763]: I0131 15:00:56.080173 4763 generic.go:334] "Generic (PLEG): container finished" podID="e2f6ea13-f993-4138-b5d5-a549e9aae21b" containerID="8044f0e198e2d99c1aaa9156f169d2b8901e85bd0917010f8c419488c46c24dc" exitCode=0 Jan 31 15:00:56 crc 
kubenswrapper[4763]: I0131 15:00:56.080305 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v59tf" event={"ID":"e2f6ea13-f993-4138-b5d5-a549e9aae21b","Type":"ContainerDied","Data":"8044f0e198e2d99c1aaa9156f169d2b8901e85bd0917010f8c419488c46c24dc"} Jan 31 15:00:56 crc kubenswrapper[4763]: I0131 15:00:56.080410 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v59tf" event={"ID":"e2f6ea13-f993-4138-b5d5-a549e9aae21b","Type":"ContainerStarted","Data":"e68a120ea9937e6c9fc25705105b6768a0da7188b3c8199ec831cf1dfc15e225"} Jan 31 15:00:56 crc kubenswrapper[4763]: I0131 15:00:56.092511 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gshr8" podStartSLOduration=1.673847795 podStartE2EDuration="4.09249322s" podCreationTimestamp="2026-01-31 15:00:52 +0000 UTC" firstStartedPulling="2026-01-31 15:00:53.024013242 +0000 UTC m=+372.778751525" lastFinishedPulling="2026-01-31 15:00:55.442658657 +0000 UTC m=+375.197396950" observedRunningTime="2026-01-31 15:00:56.089412745 +0000 UTC m=+375.844151038" watchObservedRunningTime="2026-01-31 15:00:56.09249322 +0000 UTC m=+375.847231523" Jan 31 15:00:56 crc kubenswrapper[4763]: I0131 15:00:56.103944 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-znznv" podStartSLOduration=1.628491288 podStartE2EDuration="4.103925556s" podCreationTimestamp="2026-01-31 15:00:52 +0000 UTC" firstStartedPulling="2026-01-31 15:00:53.028103435 +0000 UTC m=+372.782841728" lastFinishedPulling="2026-01-31 15:00:55.503537693 +0000 UTC m=+375.258275996" observedRunningTime="2026-01-31 15:00:56.102068075 +0000 UTC m=+375.856806368" watchObservedRunningTime="2026-01-31 15:00:56.103925556 +0000 UTC m=+375.858663839" Jan 31 15:00:57 crc kubenswrapper[4763]: I0131 15:00:57.087091 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frpn9" event={"ID":"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3","Type":"ContainerStarted","Data":"0efb032f5cfdb116df53fa2b4c6faf783f8ef21eb09b90eb902e145307ef4cbd"} Jan 31 15:00:57 crc kubenswrapper[4763]: I0131 15:00:57.089678 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v59tf" event={"ID":"e2f6ea13-f993-4138-b5d5-a549e9aae21b","Type":"ContainerStarted","Data":"49f16f1a31ca93e24b5688517baec17736c9cde4ac9ee4d45b3fd5b8775345c2"} Jan 31 15:00:58 crc kubenswrapper[4763]: I0131 15:00:58.096448 4763 generic.go:334] "Generic (PLEG): container finished" podID="e2f6ea13-f993-4138-b5d5-a549e9aae21b" containerID="49f16f1a31ca93e24b5688517baec17736c9cde4ac9ee4d45b3fd5b8775345c2" exitCode=0 Jan 31 15:00:58 crc kubenswrapper[4763]: I0131 15:00:58.096483 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v59tf" event={"ID":"e2f6ea13-f993-4138-b5d5-a549e9aae21b","Type":"ContainerDied","Data":"49f16f1a31ca93e24b5688517baec17736c9cde4ac9ee4d45b3fd5b8775345c2"} Jan 31 15:00:58 crc kubenswrapper[4763]: I0131 15:00:58.099944 4763 generic.go:334] "Generic (PLEG): container finished" podID="5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3" containerID="0efb032f5cfdb116df53fa2b4c6faf783f8ef21eb09b90eb902e145307ef4cbd" exitCode=0 Jan 31 15:00:58 crc kubenswrapper[4763]: I0131 15:00:58.100052 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frpn9" 
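The two "Observed pod startup duration" entries above can be recomputed by hand: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that end-to-end figure with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted. A minimal Go sketch of the arithmetic for redhat-operators-gshr8, parsing only the wall-clock part of the logged timestamps (the m=+... suffixes are monotonic-clock readings, which kubelet subtracts, so the final digits of the SLO figure differ by a few nanoseconds):

package main

import (
	"fmt"
	"time"
)

// Recompute the startup-latency figures logged for redhat-operators-gshr8.
// Timestamps are copied from the entry above with the monotonic "m=+..."
// suffix dropped; this is a sketch of the arithmetic, not kubelet code.
func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST" // fractional seconds are accepted when parsing
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-31 15:00:52 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2026-01-31 15:00:53.024013242 +0000 UTC") // firstStartedPulling
	lastPull := parse("2026-01-31 15:00:55.442658657 +0000 UTC")  // lastFinishedPulling
	running := parse("2026-01-31 15:00:56.09249322 +0000 UTC")    // watchObservedRunningTime

	e2e := running.Sub(created)     // 4.09249322s, the logged podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 2.418645415s spent pulling images
	slo := e2e - pull               // ~1.6738s, the logged podStartSLOduration
	fmt.Println("e2e:", e2e, "pull:", pull, "slo:", slo)
}

Run as-is this prints e2e: 4.09249322s pull: 2.418645415s slo: 1.673847805s; the logged 1.673847795 comes from subtracting the monotonic readings instead of the wall clock.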
event={"ID":"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3","Type":"ContainerDied","Data":"0efb032f5cfdb116df53fa2b4c6faf783f8ef21eb09b90eb902e145307ef4cbd"} Jan 31 15:00:59 crc kubenswrapper[4763]: I0131 15:00:59.106427 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v59tf" event={"ID":"e2f6ea13-f993-4138-b5d5-a549e9aae21b","Type":"ContainerStarted","Data":"e0aaf7ff1cd67de7251956d8a7e13c080b0fe0d719d8e0ff2adf68d5f685453a"} Jan 31 15:00:59 crc kubenswrapper[4763]: I0131 15:00:59.108642 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frpn9" event={"ID":"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3","Type":"ContainerStarted","Data":"bb58dc372d93d6bccd825a875f095687a58a7ff096b90a26ce6224713fc42821"} Jan 31 15:00:59 crc kubenswrapper[4763]: I0131 15:00:59.132071 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v59tf" podStartSLOduration=2.628991954 podStartE2EDuration="5.132051287s" podCreationTimestamp="2026-01-31 15:00:54 +0000 UTC" firstStartedPulling="2026-01-31 15:00:56.08199829 +0000 UTC m=+375.836736583" lastFinishedPulling="2026-01-31 15:00:58.585057623 +0000 UTC m=+378.339795916" observedRunningTime="2026-01-31 15:00:59.128515548 +0000 UTC m=+378.883253841" watchObservedRunningTime="2026-01-31 15:00:59.132051287 +0000 UTC m=+378.886789580" Jan 31 15:00:59 crc kubenswrapper[4763]: I0131 15:00:59.147864 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-frpn9" podStartSLOduration=2.707175099 podStartE2EDuration="5.147846394s" podCreationTimestamp="2026-01-31 15:00:54 +0000 UTC" firstStartedPulling="2026-01-31 15:00:56.077966708 +0000 UTC m=+375.832705001" lastFinishedPulling="2026-01-31 15:00:58.518638003 +0000 UTC m=+378.273376296" observedRunningTime="2026-01-31 15:00:59.144046339 +0000 UTC m=+378.898784632" watchObservedRunningTime="2026-01-31 15:00:59.147846394 +0000 UTC m=+378.902584687" Jan 31 15:01:02 crc kubenswrapper[4763]: I0131 15:01:02.089234 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:01:02 crc kubenswrapper[4763]: I0131 15:01:02.158602 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dzr7c"] Jan 31 15:01:02 crc kubenswrapper[4763]: I0131 15:01:02.416781 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:01:02 crc kubenswrapper[4763]: I0131 15:01:02.416921 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:01:02 crc kubenswrapper[4763]: I0131 15:01:02.469258 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:01:02 crc kubenswrapper[4763]: I0131 15:01:02.580592 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:01:02 crc kubenswrapper[4763]: I0131 15:01:02.580859 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:01:02 crc kubenswrapper[4763]: I0131 15:01:02.638729 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:01:03 crc kubenswrapper[4763]: I0131 15:01:03.174605 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:01:03 crc kubenswrapper[4763]: I0131 15:01:03.201885 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:01:04 crc kubenswrapper[4763]: I0131 15:01:04.791438 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:01:04 crc kubenswrapper[4763]: I0131 15:01:04.791756 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:01:04 crc kubenswrapper[4763]: I0131 15:01:04.835339 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:01:05 crc kubenswrapper[4763]: I0131 15:01:05.080399 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:01:05 crc kubenswrapper[4763]: I0131 15:01:05.080860 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:01:05 crc kubenswrapper[4763]: I0131 15:01:05.117336 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:01:05 crc kubenswrapper[4763]: I0131 15:01:05.185954 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:01:05 crc kubenswrapper[4763]: I0131 15:01:05.187253 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:01:14 crc kubenswrapper[4763]: I0131 15:01:14.178220 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:01:14 crc kubenswrapper[4763]: I0131 15:01:14.179271 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.193021 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" podUID="329bb364-3958-490e-b065-d2ce7ee1567d" containerName="registry" containerID="cri-o://361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545" gracePeriod=30 Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.589260 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.733207 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/329bb364-3958-490e-b065-d2ce7ee1567d-ca-trust-extracted\") pod \"329bb364-3958-490e-b065-d2ce7ee1567d\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.733292 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-bound-sa-token\") pod \"329bb364-3958-490e-b065-d2ce7ee1567d\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.733337 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-registry-tls\") pod \"329bb364-3958-490e-b065-d2ce7ee1567d\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.733367 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/329bb364-3958-490e-b065-d2ce7ee1567d-installation-pull-secrets\") pod \"329bb364-3958-490e-b065-d2ce7ee1567d\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.733391 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2df2f\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-kube-api-access-2df2f\") pod \"329bb364-3958-490e-b065-d2ce7ee1567d\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.733428 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-registry-certificates\") pod \"329bb364-3958-490e-b065-d2ce7ee1567d\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.733572 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"329bb364-3958-490e-b065-d2ce7ee1567d\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.733601 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-trusted-ca\") pod \"329bb364-3958-490e-b065-d2ce7ee1567d\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.734370 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "329bb364-3958-490e-b065-d2ce7ee1567d" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.734544 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "329bb364-3958-490e-b065-d2ce7ee1567d" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.739047 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "329bb364-3958-490e-b065-d2ce7ee1567d" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.739814 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "329bb364-3958-490e-b065-d2ce7ee1567d" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.741216 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-kube-api-access-2df2f" (OuterVolumeSpecName: "kube-api-access-2df2f") pod "329bb364-3958-490e-b065-d2ce7ee1567d" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d"). InnerVolumeSpecName "kube-api-access-2df2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.741245 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/329bb364-3958-490e-b065-d2ce7ee1567d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "329bb364-3958-490e-b065-d2ce7ee1567d" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.749312 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/329bb364-3958-490e-b065-d2ce7ee1567d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "329bb364-3958-490e-b065-d2ce7ee1567d" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.754273 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "329bb364-3958-490e-b065-d2ce7ee1567d" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835331 4763 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835393 4763 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/329bb364-3958-490e-b065-d2ce7ee1567d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835420 4763 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835445 4763 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835467 4763 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/329bb364-3958-490e-b065-d2ce7ee1567d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835490 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2df2f\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-kube-api-access-2df2f\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835602 4763 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.292928 4763 generic.go:334] "Generic (PLEG): container finished" podID="329bb364-3958-490e-b065-d2ce7ee1567d" containerID="361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545" exitCode=0 Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.292982 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" event={"ID":"329bb364-3958-490e-b065-d2ce7ee1567d","Type":"ContainerDied","Data":"361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545"} Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.293025 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" event={"ID":"329bb364-3958-490e-b065-d2ce7ee1567d","Type":"ContainerDied","Data":"f6421d1d39f19dfe9997df0c879a0f9ff7802342de47df550a2b31d059ccd341"} Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.293030 4763 util.go:48] "No ready sandbox for pod can be found. 
Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835331 4763 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835393 4763 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/329bb364-3958-490e-b065-d2ce7ee1567d-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835420 4763 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835445 4763 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835467 4763 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/329bb364-3958-490e-b065-d2ce7ee1567d-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835490 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2df2f\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-kube-api-access-2df2f\") on node \"crc\" DevicePath \"\""
Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835602 4763 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.292928 4763 generic.go:334] "Generic (PLEG): container finished" podID="329bb364-3958-490e-b065-d2ce7ee1567d" containerID="361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545" exitCode=0
Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.292982 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" event={"ID":"329bb364-3958-490e-b065-d2ce7ee1567d","Type":"ContainerDied","Data":"361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545"}
Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.293025 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" event={"ID":"329bb364-3958-490e-b065-d2ce7ee1567d","Type":"ContainerDied","Data":"f6421d1d39f19dfe9997df0c879a0f9ff7802342de47df550a2b31d059ccd341"}
Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.293030 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.293047 4763 scope.go:117] "RemoveContainer" containerID="361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545"
Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.319317 4763 scope.go:117] "RemoveContainer" containerID="361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545"
Jan 31 15:01:28 crc kubenswrapper[4763]: E0131 15:01:28.319854 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545\": container with ID starting with 361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545 not found: ID does not exist" containerID="361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545"
Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.319907 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545"} err="failed to get container status \"361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545\": rpc error: code = NotFound desc = could not find container \"361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545\": container with ID starting with 361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545 not found: ID does not exist"
Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.347015 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dzr7c"]
Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.368128 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dzr7c"]
Jan 31 15:01:29 crc kubenswrapper[4763]: I0131 15:01:29.054351 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="329bb364-3958-490e-b065-d2ce7ee1567d" path="/var/lib/kubelet/pods/329bb364-3958-490e-b065-d2ce7ee1567d/volumes"
Jan 31 15:01:44 crc kubenswrapper[4763]: I0131 15:01:44.177564 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 15:01:44 crc kubenswrapper[4763]: I0131 15:01:44.178898 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 15:01:44 crc kubenswrapper[4763]: I0131 15:01:44.178982 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x"
Jan 31 15:01:44 crc kubenswrapper[4763]: I0131 15:01:44.179758 4763 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b55482ed2ed4c41f6d8a4ff378f658a5880b1596f1e3f6ef24577f1114937a3b"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 15:01:44 crc kubenswrapper[4763]: I0131 
15:01:44.179848 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://b55482ed2ed4c41f6d8a4ff378f658a5880b1596f1e3f6ef24577f1114937a3b" gracePeriod=600 Jan 31 15:01:44 crc kubenswrapper[4763]: I0131 15:01:44.402168 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="b55482ed2ed4c41f6d8a4ff378f658a5880b1596f1e3f6ef24577f1114937a3b" exitCode=0 Jan 31 15:01:44 crc kubenswrapper[4763]: I0131 15:01:44.402225 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"b55482ed2ed4c41f6d8a4ff378f658a5880b1596f1e3f6ef24577f1114937a3b"} Jan 31 15:01:44 crc kubenswrapper[4763]: I0131 15:01:44.402303 4763 scope.go:117] "RemoveContainer" containerID="3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb" Jan 31 15:01:45 crc kubenswrapper[4763]: I0131 15:01:45.412221 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"9efbc169d52bc3e7c2ea07dab22d5bc7a445634e3b2db84476ba82a91a9cf629"} Jan 31 15:03:44 crc kubenswrapper[4763]: I0131 15:03:44.177848 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:03:44 crc kubenswrapper[4763]: I0131 15:03:44.178414 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:04:14 crc kubenswrapper[4763]: I0131 15:04:14.177098 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:04:14 crc kubenswrapper[4763]: I0131 15:04:14.177841 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:04:44 crc kubenswrapper[4763]: I0131 15:04:44.177211 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:04:44 crc kubenswrapper[4763]: I0131 15:04:44.177821 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:04:44 crc kubenswrapper[4763]: I0131 15:04:44.177888 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 15:04:44 crc kubenswrapper[4763]: I0131 15:04:44.178716 4763 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9efbc169d52bc3e7c2ea07dab22d5bc7a445634e3b2db84476ba82a91a9cf629"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:04:44 crc kubenswrapper[4763]: I0131 15:04:44.178823 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://9efbc169d52bc3e7c2ea07dab22d5bc7a445634e3b2db84476ba82a91a9cf629" gracePeriod=600 Jan 31 15:04:44 crc kubenswrapper[4763]: I0131 15:04:44.636218 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="9efbc169d52bc3e7c2ea07dab22d5bc7a445634e3b2db84476ba82a91a9cf629" exitCode=0 Jan 31 15:04:44 crc kubenswrapper[4763]: I0131 15:04:44.636285 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"9efbc169d52bc3e7c2ea07dab22d5bc7a445634e3b2db84476ba82a91a9cf629"} Jan 31 15:04:44 crc kubenswrapper[4763]: I0131 15:04:44.636336 4763 scope.go:117] "RemoveContainer" containerID="b55482ed2ed4c41f6d8a4ff378f658a5880b1596f1e3f6ef24577f1114937a3b" Jan 31 15:04:45 crc kubenswrapper[4763]: I0131 15:04:45.648775 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"b6da57a4479d9d9e6a720f55c269fed96eb09e97fe8b846af0fb3bb3cfb46085"} Jan 31 15:05:38 crc kubenswrapper[4763]: I0131 15:05:38.447053 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dtknf"] Jan 31 15:05:38 crc kubenswrapper[4763]: I0131 15:05:38.448313 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovn-controller" containerID="cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8" gracePeriod=30 Jan 31 15:05:38 crc kubenswrapper[4763]: I0131 15:05:38.448821 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="sbdb" containerID="cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b" gracePeriod=30 Jan 31 15:05:38 crc kubenswrapper[4763]: I0131 15:05:38.448879 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="nbdb" containerID="cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e" gracePeriod=30 
Jan 31 15:05:38 crc kubenswrapper[4763]: I0131 15:05:38.448930 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="northd" containerID="cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294" gracePeriod=30
Jan 31 15:05:38 crc kubenswrapper[4763]: I0131 15:05:38.448966 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kube-rbac-proxy-node" containerID="cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b" gracePeriod=30
Jan 31 15:05:38 crc kubenswrapper[4763]: I0131 15:05:38.448989 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovn-acl-logging" containerID="cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453" gracePeriod=30
Jan 31 15:05:38 crc kubenswrapper[4763]: I0131 15:05:38.449288 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d" gracePeriod=30
Jan 31 15:05:38 crc kubenswrapper[4763]: I0131 15:05:38.485136 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller" containerID="cri-o://0dbc532ebe28b0235c423161c9ad89a344c1f544a333aeb218dae16072e95df9" gracePeriod=30
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.003239 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/2.log"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.004231 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/1.log"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.004298 4763 generic.go:334] "Generic (PLEG): container finished" podID="2335d04f-10b2-4cf8-aae6-236650539c74" containerID="2769dbbb45eb5d98d9a4121f2a136f097f2dd1032e3c1238029f201a1307a3a6" exitCode=2
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.004416 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzkhg" event={"ID":"2335d04f-10b2-4cf8-aae6-236650539c74","Type":"ContainerDied","Data":"2769dbbb45eb5d98d9a4121f2a136f097f2dd1032e3c1238029f201a1307a3a6"}
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.004504 4763 scope.go:117] "RemoveContainer" containerID="ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.005469 4763 scope.go:117] "RemoveContainer" containerID="2769dbbb45eb5d98d9a4121f2a136f097f2dd1032e3c1238029f201a1307a3a6"
Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.006013 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-qzkhg_openshift-multus(2335d04f-10b2-4cf8-aae6-236650539c74)\"" pod="openshift-multus/multus-qzkhg" podUID="2335d04f-10b2-4cf8-aae6-236650539c74"
containerID="897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b" exitCode=0 Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.014051 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453" exitCode=143 Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.014068 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8" exitCode=143 Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.014094 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d"} Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.014115 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b"} Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.014133 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453"} Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.014151 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8"} Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.046260 4763 scope.go:117] "RemoveContainer" containerID="c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.195424 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovn-acl-logging/0.log" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.197264 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovn-controller/0.log" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.198316 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257583 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mjppx"] Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257808 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257819 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257829 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovn-acl-logging" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257835 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovn-acl-logging" Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257843 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="sbdb" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257849 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="sbdb" Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257859 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovn-controller" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257866 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovn-controller" Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257875 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257880 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller" Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257887 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257892 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller" Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257900 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="nbdb" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257905 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="nbdb" Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257916 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257921 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller" Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257927 4763 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="329bb364-3958-490e-b065-d2ce7ee1567d" containerName="registry" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257932 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="329bb364-3958-490e-b065-d2ce7ee1567d" containerName="registry" Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257947 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257952 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller" Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257960 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="northd" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257965 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="northd" Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257974 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kubecfg-setup" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257979 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kubecfg-setup" Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257986 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kube-rbac-proxy-node" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257991 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kube-rbac-proxy-node" Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257998 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258003 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258087 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="329bb364-3958-490e-b065-d2ce7ee1567d" containerName="registry" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258095 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258104 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovn-acl-logging" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258110 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258117 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="nbdb" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258125 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kube-rbac-proxy-node" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258139 4763 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258145 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="northd" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258153 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovn-controller" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258159 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="sbdb" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258166 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258322 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258331 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.259717 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284171 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-netd\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284226 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-bin\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284264 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-netns\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284308 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/047ce610-09fa-482b-8d29-45ad376d12b3-ovn-node-metrics-cert\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284331 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-systemd\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284350 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-log-socket\") pod 
\"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284377 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-config\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284404 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-kubelet\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284430 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-env-overrides\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284462 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284485 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-systemd-units\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284513 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-node-log\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284530 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-slash\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284554 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-script-lib\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284584 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxcsm\" (UniqueName: \"kubernetes.io/projected/047ce610-09fa-482b-8d29-45ad376d12b3-kube-api-access-rxcsm\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284607 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-openvswitch\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284625 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-var-lib-openvswitch\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284644 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-ovn\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284671 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-etc-openvswitch\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284719 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-ovn-kubernetes\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284991 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.285026 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.285046 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.285066 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.287309 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-node-log" (OuterVolumeSpecName: "node-log") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.287346 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-log-socket" (OuterVolumeSpecName: "log-socket") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.287744 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.287795 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.287916 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.287969 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.288007 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.288017 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). 
InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.288034 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.288062 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-slash" (OuterVolumeSpecName: "host-slash") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.287987 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.288271 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.290554 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.291985 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/047ce610-09fa-482b-8d29-45ad376d12b3-kube-api-access-rxcsm" (OuterVolumeSpecName: "kube-api-access-rxcsm") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "kube-api-access-rxcsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.292432 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/047ce610-09fa-482b-8d29-45ad376d12b3-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.314489 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.386920 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-slash\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387066 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-cni-bin\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387120 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pszp\" (UniqueName: \"kubernetes.io/projected/d24141e3-c7ee-4d60-ac74-d439fc532720-kube-api-access-2pszp\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387150 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d24141e3-c7ee-4d60-ac74-d439fc532720-ovnkube-script-lib\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387185 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-run-systemd\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387205 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-log-socket\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387282 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-run-netns\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387352 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/d24141e3-c7ee-4d60-ac74-d439fc532720-env-overrides\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387376 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-systemd-units\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387413 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-node-log\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387433 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d24141e3-c7ee-4d60-ac74-d439fc532720-ovnkube-config\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387458 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-run-openvswitch\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387549 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-etc-openvswitch\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387623 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-run-ovn-kubernetes\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387683 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-run-ovn\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387777 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387803 4763 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d24141e3-c7ee-4d60-ac74-d439fc532720-ovn-node-metrics-cert\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387864 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-var-lib-openvswitch\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387905 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-kubelet\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387949 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-cni-netd\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388055 4763 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/047ce610-09fa-482b-8d29-45ad376d12b3-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388077 4763 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388091 4763 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-log-socket\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388103 4763 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388114 4763 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388126 4763 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388139 4763 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388152 4763 
reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388166 4763 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-node-log\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388178 4763 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-slash\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388190 4763 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388205 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxcsm\" (UniqueName: \"kubernetes.io/projected/047ce610-09fa-482b-8d29-45ad376d12b3-kube-api-access-rxcsm\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388219 4763 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388231 4763 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388242 4763 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388254 4763 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388266 4763 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388280 4763 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388292 4763 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388305 4763 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489387 4763 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-etc-openvswitch\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489438 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-run-ovn-kubernetes\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489465 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-etc-openvswitch\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489473 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-run-ovn\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489500 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-run-ovn\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489517 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489534 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d24141e3-c7ee-4d60-ac74-d439fc532720-ovn-node-metrics-cert\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489543 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-run-ovn-kubernetes\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489562 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-var-lib-openvswitch\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489580 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-kubelet\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489600 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-cni-netd\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489627 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-slash\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489636 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-var-lib-openvswitch\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489662 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-cni-bin\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489667 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-cni-netd\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489685 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-kubelet\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489681 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489721 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-cni-bin\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489732 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pszp\" (UniqueName: 
\"kubernetes.io/projected/d24141e3-c7ee-4d60-ac74-d439fc532720-kube-api-access-2pszp\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489879 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d24141e3-c7ee-4d60-ac74-d439fc532720-ovnkube-script-lib\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489946 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-run-systemd\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489980 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-log-socket\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490018 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-run-netns\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490052 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d24141e3-c7ee-4d60-ac74-d439fc532720-env-overrides\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490082 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-systemd-units\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490121 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-node-log\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490178 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d24141e3-c7ee-4d60-ac74-d439fc532720-ovnkube-config\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490213 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-run-openvswitch\") pod \"ovnkube-node-mjppx\" 
(UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489743 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-slash\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490343 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-run-openvswitch\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490392 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-run-systemd\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490432 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-systemd-units\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490471 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-node-log\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490591 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d24141e3-c7ee-4d60-ac74-d439fc532720-ovnkube-script-lib\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490644 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-log-socket\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490669 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-run-netns\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.491391 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d24141e3-c7ee-4d60-ac74-d439fc532720-env-overrides\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.491462 4763 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d24141e3-c7ee-4d60-ac74-d439fc532720-ovnkube-config\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.494666 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d24141e3-c7ee-4d60-ac74-d439fc532720-ovn-node-metrics-cert\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.509308 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pszp\" (UniqueName: \"kubernetes.io/projected/d24141e3-c7ee-4d60-ac74-d439fc532720-kube-api-access-2pszp\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.572810 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: W0131 15:05:39.595328 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd24141e3_c7ee_4d60_ac74_d439fc532720.slice/crio-7d826b376e1643b32f90e1390a5cc55300ae3d6a9a8dabbfc750dfffbf14f210 WatchSource:0}: Error finding container 7d826b376e1643b32f90e1390a5cc55300ae3d6a9a8dabbfc750dfffbf14f210: Status 404 returned error can't find the container with id 7d826b376e1643b32f90e1390a5cc55300ae3d6a9a8dabbfc750dfffbf14f210 Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.025355 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovn-acl-logging/0.log" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.025931 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovn-controller/0.log" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.026421 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"be99d205ca17bdd3dd3cbb28a994ab179c5742dad18ab9a3579a8c9f686ccdc3"} Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.026498 4763 scope.go:117] "RemoveContainer" containerID="0dbc532ebe28b0235c423161c9ad89a344c1f544a333aeb218dae16072e95df9" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.026444 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.028176 4763 generic.go:334] "Generic (PLEG): container finished" podID="d24141e3-c7ee-4d60-ac74-d439fc532720" containerID="08f4a5d8ad494132e7532f03956285ddad16e65672eccccc10e297e1724de243" exitCode=0 Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.028226 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerDied","Data":"08f4a5d8ad494132e7532f03956285ddad16e65672eccccc10e297e1724de243"} Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.028338 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerStarted","Data":"7d826b376e1643b32f90e1390a5cc55300ae3d6a9a8dabbfc750dfffbf14f210"} Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.030581 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/2.log" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.044666 4763 scope.go:117] "RemoveContainer" containerID="3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.062114 4763 scope.go:117] "RemoveContainer" containerID="92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.113226 4763 scope.go:117] "RemoveContainer" containerID="67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.115767 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dtknf"] Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.125468 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dtknf"] Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.132710 4763 scope.go:117] "RemoveContainer" containerID="2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.157079 4763 scope.go:117] "RemoveContainer" containerID="897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.178793 4763 scope.go:117] "RemoveContainer" containerID="c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.197340 4763 scope.go:117] "RemoveContainer" containerID="b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.215886 4763 scope.go:117] "RemoveContainer" containerID="1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549" Jan 31 15:05:41 crc kubenswrapper[4763]: I0131 15:05:41.052800 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" path="/var/lib/kubelet/pods/047ce610-09fa-482b-8d29-45ad376d12b3/volumes" Jan 31 15:05:41 crc kubenswrapper[4763]: I0131 15:05:41.056616 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerStarted","Data":"9c2add3a8a3fbcef344b9a19a3aa64d09e2b7eb60dc559075aa9f54de70cd752"} Jan 31 15:05:41 crc 
kubenswrapper[4763]: I0131 15:05:41.056873 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerStarted","Data":"de9ecaabae2e3435689dbd9017f7b542fa61df999165b29765d4ef328e2dd914"} Jan 31 15:05:41 crc kubenswrapper[4763]: I0131 15:05:41.057662 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerStarted","Data":"98caaa33cfdada43a1933595b463cfec1a3e0b3010fd0c27456b3a1430369489"} Jan 31 15:05:41 crc kubenswrapper[4763]: I0131 15:05:41.058601 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerStarted","Data":"cd951a815a6dc616248fc284c8df27c4bec4f0c58b1817a630af68c6da69e08b"} Jan 31 15:05:41 crc kubenswrapper[4763]: I0131 15:05:41.058652 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerStarted","Data":"9c2549a254ed45b1ca19c7a204cf50f64f0dc072fd9e4419b42531201b549086"} Jan 31 15:05:41 crc kubenswrapper[4763]: I0131 15:05:41.058739 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerStarted","Data":"b700b43ddb7611ad546584c4fa3cc9cbbd4d4124924b74cccc22aa5956294e50"} Jan 31 15:05:44 crc kubenswrapper[4763]: I0131 15:05:44.073828 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerStarted","Data":"fd87e806f75c0972b66a831419b5bba5bf5774245c847d1bf0d63622debbb397"} Jan 31 15:05:46 crc kubenswrapper[4763]: I0131 15:05:46.090988 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerStarted","Data":"99bf5316311a8a521366a4c89ffae9ea8446b7b412cd5c8c72503407df0c0b6d"} Jan 31 15:05:46 crc kubenswrapper[4763]: I0131 15:05:46.091299 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:46 crc kubenswrapper[4763]: I0131 15:05:46.122668 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:46 crc kubenswrapper[4763]: I0131 15:05:46.125121 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" podStartSLOduration=7.125105742 podStartE2EDuration="7.125105742s" podCreationTimestamp="2026-01-31 15:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:05:46.124502566 +0000 UTC m=+665.879240859" watchObservedRunningTime="2026-01-31 15:05:46.125105742 +0000 UTC m=+665.879844035" Jan 31 15:05:47 crc kubenswrapper[4763]: I0131 15:05:47.099495 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:47 crc kubenswrapper[4763]: I0131 15:05:47.099817 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:47 crc 
kubenswrapper[4763]: I0131 15:05:47.145321 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:52 crc kubenswrapper[4763]: I0131 15:05:52.041665 4763 scope.go:117] "RemoveContainer" containerID="2769dbbb45eb5d98d9a4121f2a136f097f2dd1032e3c1238029f201a1307a3a6" Jan 31 15:05:52 crc kubenswrapper[4763]: E0131 15:05:52.042439 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-qzkhg_openshift-multus(2335d04f-10b2-4cf8-aae6-236650539c74)\"" pod="openshift-multus/multus-qzkhg" podUID="2335d04f-10b2-4cf8-aae6-236650539c74" Jan 31 15:06:03 crc kubenswrapper[4763]: I0131 15:06:03.857465 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9"] Jan 31 15:06:03 crc kubenswrapper[4763]: I0131 15:06:03.858932 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:03 crc kubenswrapper[4763]: I0131 15:06:03.861440 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 15:06:03 crc kubenswrapper[4763]: I0131 15:06:03.882407 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9"] Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.041837 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljvjx\" (UniqueName: \"kubernetes.io/projected/0f29e959-ca5d-4407-ac1d-4ce7001597aa-kube-api-access-ljvjx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.041956 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.042114 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.144163 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.144241 4763 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.144361 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljvjx\" (UniqueName: \"kubernetes.io/projected/0f29e959-ca5d-4407-ac1d-4ce7001597aa-kube-api-access-ljvjx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.145057 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.145103 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.179145 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljvjx\" (UniqueName: \"kubernetes.io/projected/0f29e959-ca5d-4407-ac1d-4ce7001597aa-kube-api-access-ljvjx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.189287 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: E0131 15:06:04.230126 4763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace_0f29e959-ca5d-4407-ac1d-4ce7001597aa_0(505c2f4dc04f7e5a02fe7bec0da92416ba7ef5214523cc0ba45df18284bbebe1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 15:06:04 crc kubenswrapper[4763]: E0131 15:06:04.230337 4763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace_0f29e959-ca5d-4407-ac1d-4ce7001597aa_0(505c2f4dc04f7e5a02fe7bec0da92416ba7ef5214523cc0ba45df18284bbebe1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: E0131 15:06:04.230494 4763 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace_0f29e959-ca5d-4407-ac1d-4ce7001597aa_0(505c2f4dc04f7e5a02fe7bec0da92416ba7ef5214523cc0ba45df18284bbebe1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: E0131 15:06:04.230737 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace(0f29e959-ca5d-4407-ac1d-4ce7001597aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace(0f29e959-ca5d-4407-ac1d-4ce7001597aa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace_0f29e959-ca5d-4407-ac1d-4ce7001597aa_0(505c2f4dc04f7e5a02fe7bec0da92416ba7ef5214523cc0ba45df18284bbebe1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" podUID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" Jan 31 15:06:05 crc kubenswrapper[4763]: I0131 15:06:05.225916 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:05 crc kubenswrapper[4763]: I0131 15:06:05.228134 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:05 crc kubenswrapper[4763]: E0131 15:06:05.272441 4763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace_0f29e959-ca5d-4407-ac1d-4ce7001597aa_0(3f8b5b2abb629a09a12b9b076453a0d145f1440d83e9a15ad0fd9754bf35af68): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 15:06:05 crc kubenswrapper[4763]: E0131 15:06:05.272597 4763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace_0f29e959-ca5d-4407-ac1d-4ce7001597aa_0(3f8b5b2abb629a09a12b9b076453a0d145f1440d83e9a15ad0fd9754bf35af68): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:05 crc kubenswrapper[4763]: E0131 15:06:05.272673 4763 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace_0f29e959-ca5d-4407-ac1d-4ce7001597aa_0(3f8b5b2abb629a09a12b9b076453a0d145f1440d83e9a15ad0fd9754bf35af68): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:05 crc kubenswrapper[4763]: E0131 15:06:05.272797 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace(0f29e959-ca5d-4407-ac1d-4ce7001597aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace(0f29e959-ca5d-4407-ac1d-4ce7001597aa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace_0f29e959-ca5d-4407-ac1d-4ce7001597aa_0(3f8b5b2abb629a09a12b9b076453a0d145f1440d83e9a15ad0fd9754bf35af68): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" podUID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" Jan 31 15:06:06 crc kubenswrapper[4763]: I0131 15:06:06.042101 4763 scope.go:117] "RemoveContainer" containerID="2769dbbb45eb5d98d9a4121f2a136f097f2dd1032e3c1238029f201a1307a3a6" Jan 31 15:06:06 crc kubenswrapper[4763]: I0131 15:06:06.233232 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/2.log" Jan 31 15:06:06 crc kubenswrapper[4763]: I0131 15:06:06.233631 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzkhg" event={"ID":"2335d04f-10b2-4cf8-aae6-236650539c74","Type":"ContainerStarted","Data":"dfc6f9a262aaf2bc365c77362441daa01ca565a39142be499c9cfc6db48f9cf3"} Jan 31 15:06:09 crc kubenswrapper[4763]: I0131 15:06:09.612370 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:06:19 crc kubenswrapper[4763]: I0131 15:06:19.042217 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:19 crc kubenswrapper[4763]: I0131 15:06:19.043751 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:19 crc kubenswrapper[4763]: I0131 15:06:19.509452 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9"] Jan 31 15:06:19 crc kubenswrapper[4763]: W0131 15:06:19.520188 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f29e959_ca5d_4407_ac1d_4ce7001597aa.slice/crio-766e3d11bab8a63b7a7560ca252c7dabe92e4c8ca5a7e51b720c4aca0e1b57ab WatchSource:0}: Error finding container 766e3d11bab8a63b7a7560ca252c7dabe92e4c8ca5a7e51b720c4aca0e1b57ab: Status 404 returned error can't find the container with id 766e3d11bab8a63b7a7560ca252c7dabe92e4c8ca5a7e51b720c4aca0e1b57ab Jan 31 15:06:20 crc kubenswrapper[4763]: I0131 15:06:20.327377 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerID="da83aa006c3b39539aeb81a1f57720f82d7a61cee890999fbb01f1ff6988e938" exitCode=0 Jan 31 15:06:20 crc kubenswrapper[4763]: I0131 15:06:20.327830 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" event={"ID":"0f29e959-ca5d-4407-ac1d-4ce7001597aa","Type":"ContainerDied","Data":"da83aa006c3b39539aeb81a1f57720f82d7a61cee890999fbb01f1ff6988e938"} Jan 31 15:06:20 crc kubenswrapper[4763]: I0131 15:06:20.327873 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" event={"ID":"0f29e959-ca5d-4407-ac1d-4ce7001597aa","Type":"ContainerStarted","Data":"766e3d11bab8a63b7a7560ca252c7dabe92e4c8ca5a7e51b720c4aca0e1b57ab"} Jan 31 15:06:20 crc kubenswrapper[4763]: I0131 15:06:20.331177 4763 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:06:22 crc kubenswrapper[4763]: I0131 15:06:22.347464 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerID="c23ff5eda4c8dfaee8eef876aae6d41938f9119db97f424a68199a88bc1d1de3" exitCode=0 Jan 31 15:06:22 crc kubenswrapper[4763]: I0131 15:06:22.347602 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" event={"ID":"0f29e959-ca5d-4407-ac1d-4ce7001597aa","Type":"ContainerDied","Data":"c23ff5eda4c8dfaee8eef876aae6d41938f9119db97f424a68199a88bc1d1de3"} Jan 31 15:06:23 crc kubenswrapper[4763]: I0131 15:06:23.358370 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerID="af0cb0c8d9e6478983d739d3cee2c3f17435f2214d8bb6285918c1e7c543c836" exitCode=0 Jan 31 15:06:23 crc kubenswrapper[4763]: I0131 15:06:23.358442 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" event={"ID":"0f29e959-ca5d-4407-ac1d-4ce7001597aa","Type":"ContainerDied","Data":"af0cb0c8d9e6478983d739d3cee2c3f17435f2214d8bb6285918c1e7c543c836"} Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.621913 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.726344 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljvjx\" (UniqueName: \"kubernetes.io/projected/0f29e959-ca5d-4407-ac1d-4ce7001597aa-kube-api-access-ljvjx\") pod \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.726507 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-util\") pod \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.726557 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-bundle\") pod \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.728070 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-bundle" (OuterVolumeSpecName: "bundle") pod "0f29e959-ca5d-4407-ac1d-4ce7001597aa" (UID: "0f29e959-ca5d-4407-ac1d-4ce7001597aa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.735603 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f29e959-ca5d-4407-ac1d-4ce7001597aa-kube-api-access-ljvjx" (OuterVolumeSpecName: "kube-api-access-ljvjx") pod "0f29e959-ca5d-4407-ac1d-4ce7001597aa" (UID: "0f29e959-ca5d-4407-ac1d-4ce7001597aa"). InnerVolumeSpecName "kube-api-access-ljvjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.749601 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-util" (OuterVolumeSpecName: "util") pod "0f29e959-ca5d-4407-ac1d-4ce7001597aa" (UID: "0f29e959-ca5d-4407-ac1d-4ce7001597aa"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.828138 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljvjx\" (UniqueName: \"kubernetes.io/projected/0f29e959-ca5d-4407-ac1d-4ce7001597aa-kube-api-access-ljvjx\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.828182 4763 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-util\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.828201 4763 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:25 crc kubenswrapper[4763]: I0131 15:06:25.374311 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" event={"ID":"0f29e959-ca5d-4407-ac1d-4ce7001597aa","Type":"ContainerDied","Data":"766e3d11bab8a63b7a7560ca252c7dabe92e4c8ca5a7e51b720c4aca0e1b57ab"} Jan 31 15:06:25 crc kubenswrapper[4763]: I0131 15:06:25.374371 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="766e3d11bab8a63b7a7560ca252c7dabe92e4c8ca5a7e51b720c4aca0e1b57ab" Jan 31 15:06:25 crc kubenswrapper[4763]: I0131 15:06:25.374381 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.158531 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25"] Jan 31 15:06:37 crc kubenswrapper[4763]: E0131 15:06:37.159309 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerName="extract" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.159325 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerName="extract" Jan 31 15:06:37 crc kubenswrapper[4763]: E0131 15:06:37.159348 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerName="pull" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.159356 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerName="pull" Jan 31 15:06:37 crc kubenswrapper[4763]: E0131 15:06:37.159367 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerName="util" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.159379 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerName="util" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.159495 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerName="extract" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.160030 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.162747 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.163343 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-dw2bh" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.171733 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.171898 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.171735 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.192248 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25"] Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.324296 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a42a356-dc67-417c-b291-c079e880aa79-webhook-cert\") pod \"metallb-operator-controller-manager-64b6b97b4f-gbf25\" (UID: \"5a42a356-dc67-417c-b291-c079e880aa79\") " pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.324350 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a42a356-dc67-417c-b291-c079e880aa79-apiservice-cert\") pod \"metallb-operator-controller-manager-64b6b97b4f-gbf25\" (UID: \"5a42a356-dc67-417c-b291-c079e880aa79\") " pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.324390 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfgjc\" (UniqueName: \"kubernetes.io/projected/5a42a356-dc67-417c-b291-c079e880aa79-kube-api-access-rfgjc\") pod \"metallb-operator-controller-manager-64b6b97b4f-gbf25\" (UID: \"5a42a356-dc67-417c-b291-c079e880aa79\") " pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.390151 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj"] Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.390894 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.392370 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.393688 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.393922 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-bkfwf" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.412647 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj"] Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.425439 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a42a356-dc67-417c-b291-c079e880aa79-webhook-cert\") pod \"metallb-operator-controller-manager-64b6b97b4f-gbf25\" (UID: \"5a42a356-dc67-417c-b291-c079e880aa79\") " pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.425713 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a42a356-dc67-417c-b291-c079e880aa79-apiservice-cert\") pod \"metallb-operator-controller-manager-64b6b97b4f-gbf25\" (UID: \"5a42a356-dc67-417c-b291-c079e880aa79\") " pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.425849 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfgjc\" (UniqueName: \"kubernetes.io/projected/5a42a356-dc67-417c-b291-c079e880aa79-kube-api-access-rfgjc\") pod \"metallb-operator-controller-manager-64b6b97b4f-gbf25\" (UID: \"5a42a356-dc67-417c-b291-c079e880aa79\") " pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.431298 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a42a356-dc67-417c-b291-c079e880aa79-apiservice-cert\") pod \"metallb-operator-controller-manager-64b6b97b4f-gbf25\" (UID: \"5a42a356-dc67-417c-b291-c079e880aa79\") " pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.431943 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a42a356-dc67-417c-b291-c079e880aa79-webhook-cert\") pod \"metallb-operator-controller-manager-64b6b97b4f-gbf25\" (UID: \"5a42a356-dc67-417c-b291-c079e880aa79\") " pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.444384 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfgjc\" (UniqueName: \"kubernetes.io/projected/5a42a356-dc67-417c-b291-c079e880aa79-kube-api-access-rfgjc\") pod \"metallb-operator-controller-manager-64b6b97b4f-gbf25\" (UID: \"5a42a356-dc67-417c-b291-c079e880aa79\") " pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.477289 4763 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.527407 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7zvs\" (UniqueName: \"kubernetes.io/projected/911c2e7f-03a5-49a2-8db7-5c63c602ef29-kube-api-access-g7zvs\") pod \"metallb-operator-webhook-server-6448f7d6f6-k9gcj\" (UID: \"911c2e7f-03a5-49a2-8db7-5c63c602ef29\") " pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.527464 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/911c2e7f-03a5-49a2-8db7-5c63c602ef29-webhook-cert\") pod \"metallb-operator-webhook-server-6448f7d6f6-k9gcj\" (UID: \"911c2e7f-03a5-49a2-8db7-5c63c602ef29\") " pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.527499 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/911c2e7f-03a5-49a2-8db7-5c63c602ef29-apiservice-cert\") pod \"metallb-operator-webhook-server-6448f7d6f6-k9gcj\" (UID: \"911c2e7f-03a5-49a2-8db7-5c63c602ef29\") " pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.629045 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/911c2e7f-03a5-49a2-8db7-5c63c602ef29-webhook-cert\") pod \"metallb-operator-webhook-server-6448f7d6f6-k9gcj\" (UID: \"911c2e7f-03a5-49a2-8db7-5c63c602ef29\") " pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.629111 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/911c2e7f-03a5-49a2-8db7-5c63c602ef29-apiservice-cert\") pod \"metallb-operator-webhook-server-6448f7d6f6-k9gcj\" (UID: \"911c2e7f-03a5-49a2-8db7-5c63c602ef29\") " pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.629178 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7zvs\" (UniqueName: \"kubernetes.io/projected/911c2e7f-03a5-49a2-8db7-5c63c602ef29-kube-api-access-g7zvs\") pod \"metallb-operator-webhook-server-6448f7d6f6-k9gcj\" (UID: \"911c2e7f-03a5-49a2-8db7-5c63c602ef29\") " pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.634478 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/911c2e7f-03a5-49a2-8db7-5c63c602ef29-apiservice-cert\") pod \"metallb-operator-webhook-server-6448f7d6f6-k9gcj\" (UID: \"911c2e7f-03a5-49a2-8db7-5c63c602ef29\") " pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.635954 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/911c2e7f-03a5-49a2-8db7-5c63c602ef29-webhook-cert\") pod \"metallb-operator-webhook-server-6448f7d6f6-k9gcj\" (UID: 
\"911c2e7f-03a5-49a2-8db7-5c63c602ef29\") " pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.645885 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7zvs\" (UniqueName: \"kubernetes.io/projected/911c2e7f-03a5-49a2-8db7-5c63c602ef29-kube-api-access-g7zvs\") pod \"metallb-operator-webhook-server-6448f7d6f6-k9gcj\" (UID: \"911c2e7f-03a5-49a2-8db7-5c63c602ef29\") " pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.702267 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.896348 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj"] Jan 31 15:06:37 crc kubenswrapper[4763]: W0131 15:06:37.903808 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod911c2e7f_03a5_49a2_8db7_5c63c602ef29.slice/crio-ed16c24acd643b1d907f56137191fb7b7054e8ac3b529547345eb9610ee7faf6 WatchSource:0}: Error finding container ed16c24acd643b1d907f56137191fb7b7054e8ac3b529547345eb9610ee7faf6: Status 404 returned error can't find the container with id ed16c24acd643b1d907f56137191fb7b7054e8ac3b529547345eb9610ee7faf6 Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.945058 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25"] Jan 31 15:06:38 crc kubenswrapper[4763]: I0131 15:06:38.446253 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" event={"ID":"911c2e7f-03a5-49a2-8db7-5c63c602ef29","Type":"ContainerStarted","Data":"ed16c24acd643b1d907f56137191fb7b7054e8ac3b529547345eb9610ee7faf6"} Jan 31 15:06:38 crc kubenswrapper[4763]: I0131 15:06:38.447421 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" event={"ID":"5a42a356-dc67-417c-b291-c079e880aa79","Type":"ContainerStarted","Data":"49d472a5350891e0e5a1a129c47b3d26a95753531d92f824e0e0974756f8421d"} Jan 31 15:06:42 crc kubenswrapper[4763]: I0131 15:06:42.487352 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" event={"ID":"5a42a356-dc67-417c-b291-c079e880aa79","Type":"ContainerStarted","Data":"750b34df214c48096aa6d17e78d662f78f824d835bcbd13e98c41c320c2d6db8"} Jan 31 15:06:42 crc kubenswrapper[4763]: I0131 15:06:42.487916 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:42 crc kubenswrapper[4763]: I0131 15:06:42.496238 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" event={"ID":"911c2e7f-03a5-49a2-8db7-5c63c602ef29","Type":"ContainerStarted","Data":"31884fcc5659db72f3dfc678e7113f4032b571741d095651212dde512160e971"} Jan 31 15:06:42 crc kubenswrapper[4763]: I0131 15:06:42.496636 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:42 crc kubenswrapper[4763]: I0131 15:06:42.518185 4763 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" podStartSLOduration=1.325176317 podStartE2EDuration="5.518165563s" podCreationTimestamp="2026-01-31 15:06:37 +0000 UTC" firstStartedPulling="2026-01-31 15:06:37.965090833 +0000 UTC m=+717.719829126" lastFinishedPulling="2026-01-31 15:06:42.158080069 +0000 UTC m=+721.912818372" observedRunningTime="2026-01-31 15:06:42.515840992 +0000 UTC m=+722.270579275" watchObservedRunningTime="2026-01-31 15:06:42.518165563 +0000 UTC m=+722.272903856"
Jan 31 15:06:42 crc kubenswrapper[4763]: I0131 15:06:42.553565 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" podStartSLOduration=1.290207969 podStartE2EDuration="5.553541501s" podCreationTimestamp="2026-01-31 15:06:37 +0000 UTC" firstStartedPulling="2026-01-31 15:06:37.909831823 +0000 UTC m=+717.664570116" lastFinishedPulling="2026-01-31 15:06:42.173165315 +0000 UTC m=+721.927903648" observedRunningTime="2026-01-31 15:06:42.539059131 +0000 UTC m=+722.293797424" watchObservedRunningTime="2026-01-31 15:06:42.553541501 +0000 UTC m=+722.308279794"
Jan 31 15:06:44 crc kubenswrapper[4763]: I0131 15:06:44.176829 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 15:06:44 crc kubenswrapper[4763]: I0131 15:06:44.176902 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 15:06:57 crc kubenswrapper[4763]: I0131 15:06:57.709493 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj"
Jan 31 15:07:12 crc kubenswrapper[4763]: I0131 15:07:12.167422 4763 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 31 15:07:14 crc kubenswrapper[4763]: I0131 15:07:14.176971 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 15:07:14 crc kubenswrapper[4763]: I0131 15:07:14.177042 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 15:07:17 crc kubenswrapper[4763]: I0131 15:07:17.481423 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25"
Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.187216 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt"]
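The two "Observed pod startup duration" entries decompose as podStartSLOduration = podStartE2EDuration minus the time spent pulling images, which the monotonic m=+ offsets confirm: for the controller-manager pod, pull time is 721.912818372 - 717.719829126 = 4.192989246 s, and 5.518165563 s - 4.192989246 s = 1.325176317 s, exactly the logged SLO duration (the webhook-server entry checks out the same way). The machine-config-daemon liveness failures above (connection refused on 127.0.0.1:8798, recurring on a 30 s period: 15:06:44, then 15:07:14) come from an HTTP liveness probe; a sketch of the shape such a probe takes, with illustrative values rather than the daemon's actual manifest:

    livenessProbe:
      httpGet:
        host: 127.0.0.1      # host-network pod probed on localhost
        path: /health
        port: 8798
      periodSeconds: 30      # matches the 15:06:44 -> 15:07:14 spacing
      failureThreshold: 3    # assumed; not visible in this log

"connection refused" means nothing was listening on the port yet, so the probe failed at the TCP dial before any HTTP exchange took place.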
Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.188142 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt"
Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.190932 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.191019 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-7bbkb"
Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.191945 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-ft4k2"]
Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.207886 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-ft4k2"
Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.212099 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.212447 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.250835 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt"]
Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.283638 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-kf27r"]
Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.285616 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-kf27r"
Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.289268 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.289679 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-rvj58"
Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.289879 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.290041 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.291046 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-f4wjv"]
Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.292137 4763 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.302001 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.318298 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-f4wjv"] Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.387995 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9c89dc4-758c-449e-bd6c-76f27ee6ecec-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-wwdkt\" (UID: \"d9c89dc4-758c-449e-bd6c-76f27ee6ecec\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388061 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-metrics-certs\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388089 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8fe7a08d-0d51-422f-9477-932841b77158-metallb-excludel2\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388131 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx5gt\" (UniqueName: \"kubernetes.io/projected/8fe7a08d-0d51-422f-9477-932841b77158-kube-api-access-qx5gt\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388353 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-frr-conf\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388398 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hngng\" (UniqueName: \"kubernetes.io/projected/d9c89dc4-758c-449e-bd6c-76f27ee6ecec-kube-api-access-hngng\") pod \"frr-k8s-webhook-server-7df86c4f6c-wwdkt\" (UID: \"d9c89dc4-758c-449e-bd6c-76f27ee6ecec\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388426 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35cf5cc4-3973-4d1c-b52a-804293bb1f25-metrics-certs\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388453 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-reloader\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " 
pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388595 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/35cf5cc4-3973-4d1c-b52a-804293bb1f25-frr-startup\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388643 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqcss\" (UniqueName: \"kubernetes.io/projected/35cf5cc4-3973-4d1c-b52a-804293bb1f25-kube-api-access-vqcss\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388668 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-memberlist\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388721 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-frr-sockets\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388790 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-metrics\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.489851 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-metrics-certs\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.489898 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8fe7a08d-0d51-422f-9477-932841b77158-metallb-excludel2\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.489921 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx5gt\" (UniqueName: \"kubernetes.io/projected/8fe7a08d-0d51-422f-9477-932841b77158-kube-api-access-qx5gt\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.489947 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-frr-conf\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.489970 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/30f91c96-0c0b-4426-986d-715d11a222b3-cert\") pod \"controller-6968d8fdc4-f4wjv\" (UID: \"30f91c96-0c0b-4426-986d-715d11a222b3\") " pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.489991 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hngng\" (UniqueName: \"kubernetes.io/projected/d9c89dc4-758c-449e-bd6c-76f27ee6ecec-kube-api-access-hngng\") pod \"frr-k8s-webhook-server-7df86c4f6c-wwdkt\" (UID: \"d9c89dc4-758c-449e-bd6c-76f27ee6ecec\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490009 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35cf5cc4-3973-4d1c-b52a-804293bb1f25-metrics-certs\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490030 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-reloader\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490047 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/35cf5cc4-3973-4d1c-b52a-804293bb1f25-frr-startup\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490067 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqcss\" (UniqueName: \"kubernetes.io/projected/35cf5cc4-3973-4d1c-b52a-804293bb1f25-kube-api-access-vqcss\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490084 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpp74\" (UniqueName: \"kubernetes.io/projected/30f91c96-0c0b-4426-986d-715d11a222b3-kube-api-access-zpp74\") pod \"controller-6968d8fdc4-f4wjv\" (UID: \"30f91c96-0c0b-4426-986d-715d11a222b3\") " pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490364 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-memberlist\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: E0131 15:07:18.490441 4763 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490465 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-frr-sockets\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: E0131 15:07:18.490493 4763 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-memberlist podName:8fe7a08d-0d51-422f-9477-932841b77158 nodeName:}" failed. No retries permitted until 2026-01-31 15:07:18.990476417 +0000 UTC m=+758.745214700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-memberlist") pod "speaker-kf27r" (UID: "8fe7a08d-0d51-422f-9477-932841b77158") : secret "metallb-memberlist" not found Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490553 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30f91c96-0c0b-4426-986d-715d11a222b3-metrics-certs\") pod \"controller-6968d8fdc4-f4wjv\" (UID: \"30f91c96-0c0b-4426-986d-715d11a222b3\") " pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490592 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-metrics\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490628 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9c89dc4-758c-449e-bd6c-76f27ee6ecec-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-wwdkt\" (UID: \"d9c89dc4-758c-449e-bd6c-76f27ee6ecec\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490638 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8fe7a08d-0d51-422f-9477-932841b77158-metallb-excludel2\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490584 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-frr-conf\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490641 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-reloader\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490889 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-frr-sockets\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490986 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-metrics\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.491305 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" 
(UniqueName: \"kubernetes.io/configmap/35cf5cc4-3973-4d1c-b52a-804293bb1f25-frr-startup\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.510294 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-metrics-certs\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.510342 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35cf5cc4-3973-4d1c-b52a-804293bb1f25-metrics-certs\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.512303 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9c89dc4-758c-449e-bd6c-76f27ee6ecec-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-wwdkt\" (UID: \"d9c89dc4-758c-449e-bd6c-76f27ee6ecec\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.516421 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hngng\" (UniqueName: \"kubernetes.io/projected/d9c89dc4-758c-449e-bd6c-76f27ee6ecec-kube-api-access-hngng\") pod \"frr-k8s-webhook-server-7df86c4f6c-wwdkt\" (UID: \"d9c89dc4-758c-449e-bd6c-76f27ee6ecec\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.517979 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.526768 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx5gt\" (UniqueName: \"kubernetes.io/projected/8fe7a08d-0d51-422f-9477-932841b77158-kube-api-access-qx5gt\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.528245 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqcss\" (UniqueName: \"kubernetes.io/projected/35cf5cc4-3973-4d1c-b52a-804293bb1f25-kube-api-access-vqcss\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.530359 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.591752 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30f91c96-0c0b-4426-986d-715d11a222b3-cert\") pod \"controller-6968d8fdc4-f4wjv\" (UID: \"30f91c96-0c0b-4426-986d-715d11a222b3\") " pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.592028 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpp74\" (UniqueName: \"kubernetes.io/projected/30f91c96-0c0b-4426-986d-715d11a222b3-kube-api-access-zpp74\") pod \"controller-6968d8fdc4-f4wjv\" (UID: \"30f91c96-0c0b-4426-986d-715d11a222b3\") " pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.592073 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30f91c96-0c0b-4426-986d-715d11a222b3-metrics-certs\") pod \"controller-6968d8fdc4-f4wjv\" (UID: \"30f91c96-0c0b-4426-986d-715d11a222b3\") " pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.593618 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.596975 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30f91c96-0c0b-4426-986d-715d11a222b3-metrics-certs\") pod \"controller-6968d8fdc4-f4wjv\" (UID: \"30f91c96-0c0b-4426-986d-715d11a222b3\") " pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.606817 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30f91c96-0c0b-4426-986d-715d11a222b3-cert\") pod \"controller-6968d8fdc4-f4wjv\" (UID: \"30f91c96-0c0b-4426-986d-715d11a222b3\") " pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.607990 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpp74\" (UniqueName: \"kubernetes.io/projected/30f91c96-0c0b-4426-986d-715d11a222b3-kube-api-access-zpp74\") pod \"controller-6968d8fdc4-f4wjv\" (UID: \"30f91c96-0c0b-4426-986d-715d11a222b3\") " pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.633634 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.712272 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ft4k2" event={"ID":"35cf5cc4-3973-4d1c-b52a-804293bb1f25","Type":"ContainerStarted","Data":"1fc4a1ccb90fd122f17308fbd30e5fe162996920b0a4145c865eeec8dd6a52ba"} Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.728621 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt"] Jan 31 15:07:18 crc kubenswrapper[4763]: W0131 15:07:18.729609 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9c89dc4_758c_449e_bd6c_76f27ee6ecec.slice/crio-a7dcb0cf0509a71aab5386e80f158438c3aa80945bd5e0ec64b7b965c69cf6c4 WatchSource:0}: Error finding container a7dcb0cf0509a71aab5386e80f158438c3aa80945bd5e0ec64b7b965c69cf6c4: Status 404 returned error can't find the container with id a7dcb0cf0509a71aab5386e80f158438c3aa80945bd5e0ec64b7b965c69cf6c4 Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.996044 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-memberlist\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: E0131 15:07:18.996284 4763 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 15:07:18 crc kubenswrapper[4763]: E0131 15:07:18.996555 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-memberlist podName:8fe7a08d-0d51-422f-9477-932841b77158 nodeName:}" failed. No retries permitted until 2026-01-31 15:07:19.996532874 +0000 UTC m=+759.751271167 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-memberlist") pod "speaker-kf27r" (UID: "8fe7a08d-0d51-422f-9477-932841b77158") : secret "metallb-memberlist" not found
Jan 31 15:07:19 crc kubenswrapper[4763]: I0131 15:07:19.090196 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-f4wjv"]
Jan 31 15:07:19 crc kubenswrapper[4763]: I0131 15:07:19.719827 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" event={"ID":"d9c89dc4-758c-449e-bd6c-76f27ee6ecec","Type":"ContainerStarted","Data":"a7dcb0cf0509a71aab5386e80f158438c3aa80945bd5e0ec64b7b965c69cf6c4"}
Jan 31 15:07:19 crc kubenswrapper[4763]: I0131 15:07:19.721423 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-f4wjv" event={"ID":"30f91c96-0c0b-4426-986d-715d11a222b3","Type":"ContainerStarted","Data":"429257d4d504696214848305eccff0b19f84735a0bf3fb7e4d1dc571cab4bb9b"}
Jan 31 15:07:19 crc kubenswrapper[4763]: I0131 15:07:19.721456 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-f4wjv" event={"ID":"30f91c96-0c0b-4426-986d-715d11a222b3","Type":"ContainerStarted","Data":"4387dbf4296daf6096866543a418fd4476afe84abfa4652518b2fb13e730235e"}
Jan 31 15:07:20 crc kubenswrapper[4763]: I0131 15:07:20.010509 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-memberlist\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r"
Jan 31 15:07:20 crc kubenswrapper[4763]: I0131 15:07:20.018946 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-memberlist\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r"
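Note the retry spacing on the failed memberlist mount: the first failure (15:07:18.490) scheduled a retry after 500ms, the retry at 15:07:18.996 failed again and doubled the delay to 1s, and the next attempt (15:07:20.010) succeeded. Kubelet's nestedpendingoperations applies exponential backoff per volume operation, doubling durationBeforeRetry on each consecutive failure (500ms, 1s, 2s, ... up to a cap) rather than retrying at a fixed interval. The mount could only succeed once the metallb-memberlist secret existed; it is presumably generated at runtime by the MetalLB components that started in the preceding seconds, so a brief "not found" window like this one is expected during bring-up rather than a fault.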
Need to start a new one" pod="metallb-system/speaker-kf27r" Jan 31 15:07:20 crc kubenswrapper[4763]: W0131 15:07:20.139733 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fe7a08d_0d51_422f_9477_932841b77158.slice/crio-6e6f196411dd18b9d94a71fc99eafdda65f1d46718d50ffa2e0f5c4482e73d5a WatchSource:0}: Error finding container 6e6f196411dd18b9d94a71fc99eafdda65f1d46718d50ffa2e0f5c4482e73d5a: Status 404 returned error can't find the container with id 6e6f196411dd18b9d94a71fc99eafdda65f1d46718d50ffa2e0f5c4482e73d5a Jan 31 15:07:20 crc kubenswrapper[4763]: I0131 15:07:20.736659 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kf27r" event={"ID":"8fe7a08d-0d51-422f-9477-932841b77158","Type":"ContainerStarted","Data":"eb84555a6349b60fe3eff2843d08c010fead720b299e1f5d2fd517c6630e3506"} Jan 31 15:07:20 crc kubenswrapper[4763]: I0131 15:07:20.736720 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kf27r" event={"ID":"8fe7a08d-0d51-422f-9477-932841b77158","Type":"ContainerStarted","Data":"6e6f196411dd18b9d94a71fc99eafdda65f1d46718d50ffa2e0f5c4482e73d5a"} Jan 31 15:07:23 crc kubenswrapper[4763]: I0131 15:07:23.763170 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kf27r" event={"ID":"8fe7a08d-0d51-422f-9477-932841b77158","Type":"ContainerStarted","Data":"1ed73e23ba2e08700d98ae7cab6b4f22bb3aab76fc101eb97d61fd68ca5e1593"} Jan 31 15:07:23 crc kubenswrapper[4763]: I0131 15:07:23.763521 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-kf27r" Jan 31 15:07:23 crc kubenswrapper[4763]: I0131 15:07:23.771825 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-f4wjv" event={"ID":"30f91c96-0c0b-4426-986d-715d11a222b3","Type":"ContainerStarted","Data":"b37fc7cc42df4d8a0467c4decd4f3a29ff46e36f924e3f0187ff923d4ab40fe6"} Jan 31 15:07:23 crc kubenswrapper[4763]: I0131 15:07:23.772303 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:23 crc kubenswrapper[4763]: I0131 15:07:23.793301 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-kf27r" podStartSLOduration=3.428164625 podStartE2EDuration="5.793284039s" podCreationTimestamp="2026-01-31 15:07:18 +0000 UTC" firstStartedPulling="2026-01-31 15:07:20.441741037 +0000 UTC m=+760.196479330" lastFinishedPulling="2026-01-31 15:07:22.806860451 +0000 UTC m=+762.561598744" observedRunningTime="2026-01-31 15:07:23.780606396 +0000 UTC m=+763.535344689" watchObservedRunningTime="2026-01-31 15:07:23.793284039 +0000 UTC m=+763.548022332" Jan 31 15:07:23 crc kubenswrapper[4763]: I0131 15:07:23.802730 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-f4wjv" podStartSLOduration=2.253876254 podStartE2EDuration="5.802709667s" podCreationTimestamp="2026-01-31 15:07:18 +0000 UTC" firstStartedPulling="2026-01-31 15:07:19.253643813 +0000 UTC m=+759.008382166" lastFinishedPulling="2026-01-31 15:07:22.802477286 +0000 UTC m=+762.557215579" observedRunningTime="2026-01-31 15:07:23.798205679 +0000 UTC m=+763.552943972" watchObservedRunningTime="2026-01-31 15:07:23.802709667 +0000 UTC m=+763.557447960" Jan 31 15:07:26 crc kubenswrapper[4763]: I0131 15:07:26.791211 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" event={"ID":"d9c89dc4-758c-449e-bd6c-76f27ee6ecec","Type":"ContainerStarted","Data":"4a8a8200947df19a17347c7e1c8765bc6f84296a0bac8799586287e283401f69"} Jan 31 15:07:26 crc kubenswrapper[4763]: I0131 15:07:26.793306 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" Jan 31 15:07:26 crc kubenswrapper[4763]: I0131 15:07:26.793388 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ft4k2" event={"ID":"35cf5cc4-3973-4d1c-b52a-804293bb1f25","Type":"ContainerDied","Data":"af8c4f20e5951fe51fc462b54681a3afa8b058c208b40e8392d3b94efb4c16c7"} Jan 31 15:07:26 crc kubenswrapper[4763]: I0131 15:07:26.793240 4763 generic.go:334] "Generic (PLEG): container finished" podID="35cf5cc4-3973-4d1c-b52a-804293bb1f25" containerID="af8c4f20e5951fe51fc462b54681a3afa8b058c208b40e8392d3b94efb4c16c7" exitCode=0 Jan 31 15:07:26 crc kubenswrapper[4763]: I0131 15:07:26.813047 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" podStartSLOduration=1.563502169 podStartE2EDuration="8.813023041s" podCreationTimestamp="2026-01-31 15:07:18 +0000 UTC" firstStartedPulling="2026-01-31 15:07:18.732219274 +0000 UTC m=+758.486957567" lastFinishedPulling="2026-01-31 15:07:25.981740136 +0000 UTC m=+765.736478439" observedRunningTime="2026-01-31 15:07:26.810476464 +0000 UTC m=+766.565214767" watchObservedRunningTime="2026-01-31 15:07:26.813023041 +0000 UTC m=+766.567761374" Jan 31 15:07:27 crc kubenswrapper[4763]: I0131 15:07:27.803281 4763 generic.go:334] "Generic (PLEG): container finished" podID="35cf5cc4-3973-4d1c-b52a-804293bb1f25" containerID="dc353e334658bc458deb25dfc745ce2859bb37591b50a71d564ddf83482d5107" exitCode=0 Jan 31 15:07:27 crc kubenswrapper[4763]: I0131 15:07:27.803350 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ft4k2" event={"ID":"35cf5cc4-3973-4d1c-b52a-804293bb1f25","Type":"ContainerDied","Data":"dc353e334658bc458deb25dfc745ce2859bb37591b50a71d564ddf83482d5107"} Jan 31 15:07:28 crc kubenswrapper[4763]: I0131 15:07:28.811353 4763 generic.go:334] "Generic (PLEG): container finished" podID="35cf5cc4-3973-4d1c-b52a-804293bb1f25" containerID="d27813fd97b27ee9cc877bc981f7a94d4b28da8825219ac87839bda047d52cab" exitCode=0 Jan 31 15:07:28 crc kubenswrapper[4763]: I0131 15:07:28.811461 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ft4k2" event={"ID":"35cf5cc4-3973-4d1c-b52a-804293bb1f25","Type":"ContainerDied","Data":"d27813fd97b27ee9cc877bc981f7a94d4b28da8825219ac87839bda047d52cab"} Jan 31 15:07:29 crc kubenswrapper[4763]: I0131 15:07:29.823885 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ft4k2" event={"ID":"35cf5cc4-3973-4d1c-b52a-804293bb1f25","Type":"ContainerStarted","Data":"9aa29f971c84fd7f058aa5e42fa186b3dd76a5e4ca3e96f18afa51b6a05653c3"} Jan 31 15:07:29 crc kubenswrapper[4763]: I0131 15:07:29.824313 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ft4k2" event={"ID":"35cf5cc4-3973-4d1c-b52a-804293bb1f25","Type":"ContainerStarted","Data":"3f4a4120ce9e745aa4eb17c5b98801f9e7d9ed260f86e066afc039a2df74ccc9"} Jan 31 15:07:29 crc kubenswrapper[4763]: I0131 15:07:29.824341 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:29 crc kubenswrapper[4763]: I0131 
Jan 31 15:07:29 crc kubenswrapper[4763]: I0131 15:07:29.824371 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ft4k2" event={"ID":"35cf5cc4-3973-4d1c-b52a-804293bb1f25","Type":"ContainerStarted","Data":"9f6f26276ab742cfdbf9c215ea8a998de0cec3d5aee011bca35bf58bf72e0370"}
Jan 31 15:07:29 crc kubenswrapper[4763]: I0131 15:07:29.824387 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ft4k2" event={"ID":"35cf5cc4-3973-4d1c-b52a-804293bb1f25","Type":"ContainerStarted","Data":"9af1e51eef2e95d7801c8694ec506723b2705fd111ca099ffdbfd8a7a8ca64c2"}
Jan 31 15:07:29 crc kubenswrapper[4763]: I0131 15:07:29.824401 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ft4k2" event={"ID":"35cf5cc4-3973-4d1c-b52a-804293bb1f25","Type":"ContainerStarted","Data":"3cb31b4fd6b84e8ce9d27b65fbda97162e964453d3788871e724d9f68e6f33d2"}
Jan 31 15:07:29 crc kubenswrapper[4763]: I0131 15:07:29.849241 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-ft4k2" podStartSLOduration=4.606533822 podStartE2EDuration="11.849226905s" podCreationTimestamp="2026-01-31 15:07:18 +0000 UTC" firstStartedPulling="2026-01-31 15:07:18.689453961 +0000 UTC m=+758.444192254" lastFinishedPulling="2026-01-31 15:07:25.932147004 +0000 UTC m=+765.686885337" observedRunningTime="2026-01-31 15:07:29.847031397 +0000 UTC m=+769.601769700" watchObservedRunningTime="2026-01-31 15:07:29.849226905 +0000 UTC m=+769.603965198"
Jan 31 15:07:30 crc kubenswrapper[4763]: I0131 15:07:30.117221 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-kf27r"
Jan 31 15:07:33 crc kubenswrapper[4763]: I0131 15:07:33.531358 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-ft4k2"
Jan 31 15:07:33 crc kubenswrapper[4763]: I0131 15:07:33.594451 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-ft4k2"
Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.014506 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-ttnjc"]
Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.016007 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-ttnjc"
Need to start a new one" pod="openstack-operators/mariadb-operator-index-ttnjc" Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.025100 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.025503 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.026250 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-ttnjc"] Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.027922 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-gf28d" Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.037519 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj4fd\" (UniqueName: \"kubernetes.io/projected/a6750409-e191-47cd-8abe-bf763a980ed5-kube-api-access-lj4fd\") pod \"mariadb-operator-index-ttnjc\" (UID: \"a6750409-e191-47cd-8abe-bf763a980ed5\") " pod="openstack-operators/mariadb-operator-index-ttnjc" Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.138515 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj4fd\" (UniqueName: \"kubernetes.io/projected/a6750409-e191-47cd-8abe-bf763a980ed5-kube-api-access-lj4fd\") pod \"mariadb-operator-index-ttnjc\" (UID: \"a6750409-e191-47cd-8abe-bf763a980ed5\") " pod="openstack-operators/mariadb-operator-index-ttnjc" Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.157076 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj4fd\" (UniqueName: \"kubernetes.io/projected/a6750409-e191-47cd-8abe-bf763a980ed5-kube-api-access-lj4fd\") pod \"mariadb-operator-index-ttnjc\" (UID: \"a6750409-e191-47cd-8abe-bf763a980ed5\") " pod="openstack-operators/mariadb-operator-index-ttnjc" Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.339579 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-ttnjc" Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.796283 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-ttnjc"] Jan 31 15:07:36 crc kubenswrapper[4763]: W0131 15:07:36.798196 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6750409_e191_47cd_8abe_bf763a980ed5.slice/crio-55812768ef9e9acb8f57d03f8867d4cd30a89d38fe9b2fe3f1de7204e6895a30 WatchSource:0}: Error finding container 55812768ef9e9acb8f57d03f8867d4cd30a89d38fe9b2fe3f1de7204e6895a30: Status 404 returned error can't find the container with id 55812768ef9e9acb8f57d03f8867d4cd30a89d38fe9b2fe3f1de7204e6895a30 Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.873159 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-ttnjc" event={"ID":"a6750409-e191-47cd-8abe-bf763a980ed5","Type":"ContainerStarted","Data":"55812768ef9e9acb8f57d03f8867d4cd30a89d38fe9b2fe3f1de7204e6895a30"} Jan 31 15:07:37 crc kubenswrapper[4763]: I0131 15:07:37.880776 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-ttnjc" event={"ID":"a6750409-e191-47cd-8abe-bf763a980ed5","Type":"ContainerStarted","Data":"2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1"} Jan 31 15:07:37 crc kubenswrapper[4763]: I0131 15:07:37.900033 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-ttnjc" podStartSLOduration=2.104305513 podStartE2EDuration="2.900015244s" podCreationTimestamp="2026-01-31 15:07:35 +0000 UTC" firstStartedPulling="2026-01-31 15:07:36.800835776 +0000 UTC m=+776.555574079" lastFinishedPulling="2026-01-31 15:07:37.596545497 +0000 UTC m=+777.351283810" observedRunningTime="2026-01-31 15:07:37.897859147 +0000 UTC m=+777.652597520" watchObservedRunningTime="2026-01-31 15:07:37.900015244 +0000 UTC m=+777.654753537" Jan 31 15:07:38 crc kubenswrapper[4763]: I0131 15:07:38.523081 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" Jan 31 15:07:38 crc kubenswrapper[4763]: I0131 15:07:38.535854 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:38 crc kubenswrapper[4763]: I0131 15:07:38.637875 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:39 crc kubenswrapper[4763]: I0131 15:07:39.398528 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-ttnjc"] Jan 31 15:07:39 crc kubenswrapper[4763]: I0131 15:07:39.897136 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-ttnjc" podUID="a6750409-e191-47cd-8abe-bf763a980ed5" containerName="registry-server" containerID="cri-o://2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1" gracePeriod=2 Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.016288 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-d2rtv"] Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.017138 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-d2rtv" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.026318 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-d2rtv"] Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.096680 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lkhf\" (UniqueName: \"kubernetes.io/projected/29673dd0-5315-4de5-bbc4-d8deb8581b9d-kube-api-access-2lkhf\") pod \"mariadb-operator-index-d2rtv\" (UID: \"29673dd0-5315-4de5-bbc4-d8deb8581b9d\") " pod="openstack-operators/mariadb-operator-index-d2rtv" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.198367 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lkhf\" (UniqueName: \"kubernetes.io/projected/29673dd0-5315-4de5-bbc4-d8deb8581b9d-kube-api-access-2lkhf\") pod \"mariadb-operator-index-d2rtv\" (UID: \"29673dd0-5315-4de5-bbc4-d8deb8581b9d\") " pod="openstack-operators/mariadb-operator-index-d2rtv" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.221183 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lkhf\" (UniqueName: \"kubernetes.io/projected/29673dd0-5315-4de5-bbc4-d8deb8581b9d-kube-api-access-2lkhf\") pod \"mariadb-operator-index-d2rtv\" (UID: \"29673dd0-5315-4de5-bbc4-d8deb8581b9d\") " pod="openstack-operators/mariadb-operator-index-d2rtv" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.353990 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-d2rtv" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.631199 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-d2rtv"] Jan 31 15:07:40 crc kubenswrapper[4763]: W0131 15:07:40.638609 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29673dd0_5315_4de5_bbc4_d8deb8581b9d.slice/crio-432972debde771139e8dc45988516562069256c9b270fd3dc6b79f35df454384 WatchSource:0}: Error finding container 432972debde771139e8dc45988516562069256c9b270fd3dc6b79f35df454384: Status 404 returned error can't find the container with id 432972debde771139e8dc45988516562069256c9b270fd3dc6b79f35df454384 Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.744904 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-ttnjc" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.905246 4763 generic.go:334] "Generic (PLEG): container finished" podID="a6750409-e191-47cd-8abe-bf763a980ed5" containerID="2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1" exitCode=0 Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.905315 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-ttnjc" event={"ID":"a6750409-e191-47cd-8abe-bf763a980ed5","Type":"ContainerDied","Data":"2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1"} Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.905330 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-ttnjc" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.906023 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-ttnjc" event={"ID":"a6750409-e191-47cd-8abe-bf763a980ed5","Type":"ContainerDied","Data":"55812768ef9e9acb8f57d03f8867d4cd30a89d38fe9b2fe3f1de7204e6895a30"} Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.906143 4763 scope.go:117] "RemoveContainer" containerID="2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.907329 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-d2rtv" event={"ID":"29673dd0-5315-4de5-bbc4-d8deb8581b9d","Type":"ContainerStarted","Data":"432972debde771139e8dc45988516562069256c9b270fd3dc6b79f35df454384"} Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.907889 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj4fd\" (UniqueName: \"kubernetes.io/projected/a6750409-e191-47cd-8abe-bf763a980ed5-kube-api-access-lj4fd\") pod \"a6750409-e191-47cd-8abe-bf763a980ed5\" (UID: \"a6750409-e191-47cd-8abe-bf763a980ed5\") " Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.915848 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6750409-e191-47cd-8abe-bf763a980ed5-kube-api-access-lj4fd" (OuterVolumeSpecName: "kube-api-access-lj4fd") pod "a6750409-e191-47cd-8abe-bf763a980ed5" (UID: "a6750409-e191-47cd-8abe-bf763a980ed5"). InnerVolumeSpecName "kube-api-access-lj4fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.927973 4763 scope.go:117] "RemoveContainer" containerID="2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1" Jan 31 15:07:40 crc kubenswrapper[4763]: E0131 15:07:40.928368 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1\": container with ID starting with 2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1 not found: ID does not exist" containerID="2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.928395 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1"} err="failed to get container status \"2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1\": rpc error: code = NotFound desc = could not find container \"2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1\": container with ID starting with 2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1 not found: ID does not exist" Jan 31 15:07:41 crc kubenswrapper[4763]: I0131 15:07:41.010900 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj4fd\" (UniqueName: \"kubernetes.io/projected/a6750409-e191-47cd-8abe-bf763a980ed5-kube-api-access-lj4fd\") on node \"crc\" DevicePath \"\"" Jan 31 15:07:41 crc kubenswrapper[4763]: I0131 15:07:41.226465 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-ttnjc"] Jan 31 15:07:41 crc kubenswrapper[4763]: I0131 15:07:41.229620 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack-operators/mariadb-operator-index-ttnjc"] Jan 31 15:07:41 crc kubenswrapper[4763]: I0131 15:07:41.915441 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-d2rtv" event={"ID":"29673dd0-5315-4de5-bbc4-d8deb8581b9d","Type":"ContainerStarted","Data":"3cfc93c04ffb2a207f1737ca08129197882fbf66636f7a931f0211e2f4411773"} Jan 31 15:07:41 crc kubenswrapper[4763]: I0131 15:07:41.945737 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-d2rtv" podStartSLOduration=2.509736496 podStartE2EDuration="2.945713543s" podCreationTimestamp="2026-01-31 15:07:39 +0000 UTC" firstStartedPulling="2026-01-31 15:07:40.642790443 +0000 UTC m=+780.397528736" lastFinishedPulling="2026-01-31 15:07:41.07876749 +0000 UTC m=+780.833505783" observedRunningTime="2026-01-31 15:07:41.936607625 +0000 UTC m=+781.691345958" watchObservedRunningTime="2026-01-31 15:07:41.945713543 +0000 UTC m=+781.700451856" Jan 31 15:07:43 crc kubenswrapper[4763]: I0131 15:07:43.054761 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6750409-e191-47cd-8abe-bf763a980ed5" path="/var/lib/kubelet/pods/a6750409-e191-47cd-8abe-bf763a980ed5/volumes" Jan 31 15:07:44 crc kubenswrapper[4763]: I0131 15:07:44.177892 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:07:44 crc kubenswrapper[4763]: I0131 15:07:44.178387 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:07:44 crc kubenswrapper[4763]: I0131 15:07:44.178537 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 15:07:44 crc kubenswrapper[4763]: I0131 15:07:44.179806 4763 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b6da57a4479d9d9e6a720f55c269fed96eb09e97fe8b846af0fb3bb3cfb46085"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:07:44 crc kubenswrapper[4763]: I0131 15:07:44.179977 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://b6da57a4479d9d9e6a720f55c269fed96eb09e97fe8b846af0fb3bb3cfb46085" gracePeriod=600 Jan 31 15:07:44 crc kubenswrapper[4763]: I0131 15:07:44.945777 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="b6da57a4479d9d9e6a720f55c269fed96eb09e97fe8b846af0fb3bb3cfb46085" exitCode=0 Jan 31 15:07:44 crc kubenswrapper[4763]: I0131 15:07:44.945838 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" 
event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"b6da57a4479d9d9e6a720f55c269fed96eb09e97fe8b846af0fb3bb3cfb46085"} Jan 31 15:07:44 crc kubenswrapper[4763]: I0131 15:07:44.946398 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"b85e19e54b5edf80a14f0fece0b4788a8867e6919fb49643606ef0f2d6f8bd3e"} Jan 31 15:07:44 crc kubenswrapper[4763]: I0131 15:07:44.946458 4763 scope.go:117] "RemoveContainer" containerID="9efbc169d52bc3e7c2ea07dab22d5bc7a445634e3b2db84476ba82a91a9cf629" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.013097 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4g6dt"] Jan 31 15:07:50 crc kubenswrapper[4763]: E0131 15:07:50.014314 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6750409-e191-47cd-8abe-bf763a980ed5" containerName="registry-server" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.014346 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6750409-e191-47cd-8abe-bf763a980ed5" containerName="registry-server" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.014585 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6750409-e191-47cd-8abe-bf763a980ed5" containerName="registry-server" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.015993 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.047499 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g6dt"] Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.144359 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-catalog-content\") pod \"redhat-marketplace-4g6dt\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.144847 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-utilities\") pod \"redhat-marketplace-4g6dt\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.144924 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9lkp\" (UniqueName: \"kubernetes.io/projected/011bc840-ca03-452f-8b2c-c3a8181b1883-kube-api-access-k9lkp\") pod \"redhat-marketplace-4g6dt\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.246352 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-catalog-content\") pod \"redhat-marketplace-4g6dt\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.246423 4763 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-utilities\") pod \"redhat-marketplace-4g6dt\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.246472 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9lkp\" (UniqueName: \"kubernetes.io/projected/011bc840-ca03-452f-8b2c-c3a8181b1883-kube-api-access-k9lkp\") pod \"redhat-marketplace-4g6dt\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.246911 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-catalog-content\") pod \"redhat-marketplace-4g6dt\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.247020 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-utilities\") pod \"redhat-marketplace-4g6dt\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.271323 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9lkp\" (UniqueName: \"kubernetes.io/projected/011bc840-ca03-452f-8b2c-c3a8181b1883-kube-api-access-k9lkp\") pod \"redhat-marketplace-4g6dt\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.344555 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.355192 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-d2rtv" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.355258 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-d2rtv" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.400539 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-d2rtv" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.784906 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g6dt"] Jan 31 15:07:50 crc kubenswrapper[4763]: W0131 15:07:50.795131 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod011bc840_ca03_452f_8b2c_c3a8181b1883.slice/crio-75c29313c29b71da356103a384a220d160f8a9d79b190e012c6178a65b81aa01 WatchSource:0}: Error finding container 75c29313c29b71da356103a384a220d160f8a9d79b190e012c6178a65b81aa01: Status 404 returned error can't find the container with id 75c29313c29b71da356103a384a220d160f8a9d79b190e012c6178a65b81aa01 Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.996873 4763 generic.go:334] "Generic (PLEG): container finished" podID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerID="2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318" exitCode=0 Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.997301 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g6dt" event={"ID":"011bc840-ca03-452f-8b2c-c3a8181b1883","Type":"ContainerDied","Data":"2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318"} Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.997384 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g6dt" event={"ID":"011bc840-ca03-452f-8b2c-c3a8181b1883","Type":"ContainerStarted","Data":"75c29313c29b71da356103a384a220d160f8a9d79b190e012c6178a65b81aa01"} Jan 31 15:07:51 crc kubenswrapper[4763]: I0131 15:07:51.053169 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-d2rtv" Jan 31 15:07:52 crc kubenswrapper[4763]: I0131 15:07:52.004093 4763 generic.go:334] "Generic (PLEG): container finished" podID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerID="252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5" exitCode=0 Jan 31 15:07:52 crc kubenswrapper[4763]: I0131 15:07:52.004194 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g6dt" event={"ID":"011bc840-ca03-452f-8b2c-c3a8181b1883","Type":"ContainerDied","Data":"252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5"} Jan 31 15:07:53 crc kubenswrapper[4763]: I0131 15:07:53.015207 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g6dt" event={"ID":"011bc840-ca03-452f-8b2c-c3a8181b1883","Type":"ContainerStarted","Data":"2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89"} Jan 31 15:07:53 crc kubenswrapper[4763]: I0131 15:07:53.051791 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4g6dt" 
Jan 31 15:07:58 crc kubenswrapper[4763]: I0131 15:07:58.880004 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s"]
Jan 31 15:07:58 crc kubenswrapper[4763]: I0131 15:07:58.882800 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s"
Jan 31 15:07:58 crc kubenswrapper[4763]: I0131 15:07:58.885894 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rrv7w"
Jan 31 15:07:58 crc kubenswrapper[4763]: I0131 15:07:58.888471 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s"]
Jan 31 15:07:58 crc kubenswrapper[4763]: I0131 15:07:58.978088 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2877l\" (UniqueName: \"kubernetes.io/projected/50493718-9240-44a6-bb1a-4c6c97473f2d-kube-api-access-2877l\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s"
Jan 31 15:07:58 crc kubenswrapper[4763]: I0131 15:07:58.978205 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s"
Jan 31 15:07:58 crc kubenswrapper[4763]: I0131 15:07:58.978357 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s"
Jan 31 15:07:59 crc kubenswrapper[4763]: I0131 15:07:59.080401 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s"
Jan 31 15:07:59 crc kubenswrapper[4763]: I0131 15:07:59.080611 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s"
pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:07:59 crc kubenswrapper[4763]: I0131 15:07:59.080731 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2877l\" (UniqueName: \"kubernetes.io/projected/50493718-9240-44a6-bb1a-4c6c97473f2d-kube-api-access-2877l\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:07:59 crc kubenswrapper[4763]: I0131 15:07:59.082135 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:07:59 crc kubenswrapper[4763]: I0131 15:07:59.082179 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:07:59 crc kubenswrapper[4763]: I0131 15:07:59.108299 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2877l\" (UniqueName: \"kubernetes.io/projected/50493718-9240-44a6-bb1a-4c6c97473f2d-kube-api-access-2877l\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:07:59 crc kubenswrapper[4763]: I0131 15:07:59.203823 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:07:59 crc kubenswrapper[4763]: I0131 15:07:59.676481 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s"] Jan 31 15:07:59 crc kubenswrapper[4763]: W0131 15:07:59.684816 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50493718_9240_44a6_bb1a_4c6c97473f2d.slice/crio-3a7d88da2611735de5486b2a7aaf488fe7772e27dcd69092990e2e041934dfad WatchSource:0}: Error finding container 3a7d88da2611735de5486b2a7aaf488fe7772e27dcd69092990e2e041934dfad: Status 404 returned error can't find the container with id 3a7d88da2611735de5486b2a7aaf488fe7772e27dcd69092990e2e041934dfad Jan 31 15:08:00 crc kubenswrapper[4763]: I0131 15:08:00.079380 4763 generic.go:334] "Generic (PLEG): container finished" podID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerID="08126f8f531873412f4f33de869d99e48b5d7e549cdb48af5e3d9b963d0ca5f8" exitCode=0 Jan 31 15:08:00 crc kubenswrapper[4763]: I0131 15:08:00.079456 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" event={"ID":"50493718-9240-44a6-bb1a-4c6c97473f2d","Type":"ContainerDied","Data":"08126f8f531873412f4f33de869d99e48b5d7e549cdb48af5e3d9b963d0ca5f8"} Jan 31 15:08:00 crc kubenswrapper[4763]: I0131 15:08:00.079875 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" event={"ID":"50493718-9240-44a6-bb1a-4c6c97473f2d","Type":"ContainerStarted","Data":"3a7d88da2611735de5486b2a7aaf488fe7772e27dcd69092990e2e041934dfad"} Jan 31 15:08:00 crc kubenswrapper[4763]: I0131 15:08:00.345163 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:08:00 crc kubenswrapper[4763]: I0131 15:08:00.345260 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:08:00 crc kubenswrapper[4763]: I0131 15:08:00.409966 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:08:01 crc kubenswrapper[4763]: I0131 15:08:01.088040 4763 generic.go:334] "Generic (PLEG): container finished" podID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerID="d759296e36ed7771bc791581a0938998910bf76fa0dfae164757b2d9c1d5aade" exitCode=0 Jan 31 15:08:01 crc kubenswrapper[4763]: I0131 15:08:01.088094 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" event={"ID":"50493718-9240-44a6-bb1a-4c6c97473f2d","Type":"ContainerDied","Data":"d759296e36ed7771bc791581a0938998910bf76fa0dfae164757b2d9c1d5aade"} Jan 31 15:08:01 crc kubenswrapper[4763]: E0131 15:08:01.130403 4763 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50493718_9240_44a6_bb1a_4c6c97473f2d.slice/crio-conmon-d759296e36ed7771bc791581a0938998910bf76fa0dfae164757b2d9c1d5aade.scope\": RecentStats: unable to find data in memory cache]" Jan 31 15:08:01 crc kubenswrapper[4763]: I0131 15:08:01.164259 4763 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:08:02 crc kubenswrapper[4763]: I0131 15:08:02.103332 4763 generic.go:334] "Generic (PLEG): container finished" podID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerID="117782c4f1883d04d30b2babad15c1bb35d694737e3d0ddd44a957df63ad6994" exitCode=0 Jan 31 15:08:02 crc kubenswrapper[4763]: I0131 15:08:02.103610 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" event={"ID":"50493718-9240-44a6-bb1a-4c6c97473f2d","Type":"ContainerDied","Data":"117782c4f1883d04d30b2babad15c1bb35d694737e3d0ddd44a957df63ad6994"} Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.202347 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g6dt"] Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.203259 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4g6dt" podUID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerName="registry-server" containerID="cri-o://2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89" gracePeriod=2 Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.541942 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.643563 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-bundle\") pod \"50493718-9240-44a6-bb1a-4c6c97473f2d\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.643633 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2877l\" (UniqueName: \"kubernetes.io/projected/50493718-9240-44a6-bb1a-4c6c97473f2d-kube-api-access-2877l\") pod \"50493718-9240-44a6-bb1a-4c6c97473f2d\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.643773 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-util\") pod \"50493718-9240-44a6-bb1a-4c6c97473f2d\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.650870 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50493718-9240-44a6-bb1a-4c6c97473f2d-kube-api-access-2877l" (OuterVolumeSpecName: "kube-api-access-2877l") pod "50493718-9240-44a6-bb1a-4c6c97473f2d" (UID: "50493718-9240-44a6-bb1a-4c6c97473f2d"). InnerVolumeSpecName "kube-api-access-2877l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.653098 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-bundle" (OuterVolumeSpecName: "bundle") pod "50493718-9240-44a6-bb1a-4c6c97473f2d" (UID: "50493718-9240-44a6-bb1a-4c6c97473f2d"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.658468 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-util" (OuterVolumeSpecName: "util") pod "50493718-9240-44a6-bb1a-4c6c97473f2d" (UID: "50493718-9240-44a6-bb1a-4c6c97473f2d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.673727 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.744992 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9lkp\" (UniqueName: \"kubernetes.io/projected/011bc840-ca03-452f-8b2c-c3a8181b1883-kube-api-access-k9lkp\") pod \"011bc840-ca03-452f-8b2c-c3a8181b1883\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.745047 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-utilities\") pod \"011bc840-ca03-452f-8b2c-c3a8181b1883\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.745077 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-catalog-content\") pod \"011bc840-ca03-452f-8b2c-c3a8181b1883\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.745242 4763 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-util\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.745255 4763 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.745273 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2877l\" (UniqueName: \"kubernetes.io/projected/50493718-9240-44a6-bb1a-4c6c97473f2d-kube-api-access-2877l\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.746337 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-utilities" (OuterVolumeSpecName: "utilities") pod "011bc840-ca03-452f-8b2c-c3a8181b1883" (UID: "011bc840-ca03-452f-8b2c-c3a8181b1883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.748365 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/011bc840-ca03-452f-8b2c-c3a8181b1883-kube-api-access-k9lkp" (OuterVolumeSpecName: "kube-api-access-k9lkp") pod "011bc840-ca03-452f-8b2c-c3a8181b1883" (UID: "011bc840-ca03-452f-8b2c-c3a8181b1883"). InnerVolumeSpecName "kube-api-access-k9lkp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.775404 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "011bc840-ca03-452f-8b2c-c3a8181b1883" (UID: "011bc840-ca03-452f-8b2c-c3a8181b1883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.846332 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9lkp\" (UniqueName: \"kubernetes.io/projected/011bc840-ca03-452f-8b2c-c3a8181b1883-kube-api-access-k9lkp\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.846376 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.846392 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.118023 4763 generic.go:334] "Generic (PLEG): container finished" podID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerID="2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89" exitCode=0 Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.118067 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.118086 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g6dt" event={"ID":"011bc840-ca03-452f-8b2c-c3a8181b1883","Type":"ContainerDied","Data":"2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89"} Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.118111 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g6dt" event={"ID":"011bc840-ca03-452f-8b2c-c3a8181b1883","Type":"ContainerDied","Data":"75c29313c29b71da356103a384a220d160f8a9d79b190e012c6178a65b81aa01"} Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.118128 4763 scope.go:117] "RemoveContainer" containerID="2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.124271 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" event={"ID":"50493718-9240-44a6-bb1a-4c6c97473f2d","Type":"ContainerDied","Data":"3a7d88da2611735de5486b2a7aaf488fe7772e27dcd69092990e2e041934dfad"} Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.124332 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a7d88da2611735de5486b2a7aaf488fe7772e27dcd69092990e2e041934dfad" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.124291 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.149683 4763 scope.go:117] "RemoveContainer" containerID="252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.152216 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g6dt"] Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.158075 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g6dt"] Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.183810 4763 scope.go:117] "RemoveContainer" containerID="2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.209163 4763 scope.go:117] "RemoveContainer" containerID="2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89" Jan 31 15:08:04 crc kubenswrapper[4763]: E0131 15:08:04.209636 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89\": container with ID starting with 2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89 not found: ID does not exist" containerID="2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.209667 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89"} err="failed to get container status \"2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89\": rpc error: code = NotFound desc = could not find container \"2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89\": container with ID starting with 2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89 not found: ID does not exist" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.209717 4763 scope.go:117] "RemoveContainer" containerID="252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5" Jan 31 15:08:04 crc kubenswrapper[4763]: E0131 15:08:04.210082 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5\": container with ID starting with 252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5 not found: ID does not exist" containerID="252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.210109 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5"} err="failed to get container status \"252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5\": rpc error: code = NotFound desc = could not find container \"252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5\": container with ID starting with 252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5 not found: ID does not exist" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.210126 4763 scope.go:117] "RemoveContainer" containerID="2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318" Jan 31 15:08:04 crc kubenswrapper[4763]: E0131 15:08:04.210352 4763 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318\": container with ID starting with 2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318 not found: ID does not exist" containerID="2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.210375 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318"} err="failed to get container status \"2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318\": rpc error: code = NotFound desc = could not find container \"2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318\": container with ID starting with 2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318 not found: ID does not exist" Jan 31 15:08:05 crc kubenswrapper[4763]: I0131 15:08:05.069067 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="011bc840-ca03-452f-8b2c-c3a8181b1883" path="/var/lib/kubelet/pods/011bc840-ca03-452f-8b2c-c3a8181b1883/volumes" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.576370 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc"] Jan 31 15:08:12 crc kubenswrapper[4763]: E0131 15:08:12.577141 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerName="pull" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.577158 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerName="pull" Jan 31 15:08:12 crc kubenswrapper[4763]: E0131 15:08:12.577177 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerName="extract-utilities" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.577185 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerName="extract-utilities" Jan 31 15:08:12 crc kubenswrapper[4763]: E0131 15:08:12.577194 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerName="extract" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.577201 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerName="extract" Jan 31 15:08:12 crc kubenswrapper[4763]: E0131 15:08:12.577217 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerName="util" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.577223 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerName="util" Jan 31 15:08:12 crc kubenswrapper[4763]: E0131 15:08:12.577237 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerName="extract-content" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.577243 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerName="extract-content" Jan 31 15:08:12 crc kubenswrapper[4763]: E0131 15:08:12.577251 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="011bc840-ca03-452f-8b2c-c3a8181b1883" 
containerName="registry-server" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.577258 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerName="registry-server" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.577360 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerName="extract" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.577377 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerName="registry-server" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.577851 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.579916 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.580333 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-kqhgp" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.581820 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.600917 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc"] Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.776498 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5kn2\" (UniqueName: \"kubernetes.io/projected/30bcffc2-0054-475e-af66-74b73ec95edb-kube-api-access-w5kn2\") pod \"mariadb-operator-controller-manager-68956c85f5-mrnqc\" (UID: \"30bcffc2-0054-475e-af66-74b73ec95edb\") " pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.776857 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30bcffc2-0054-475e-af66-74b73ec95edb-webhook-cert\") pod \"mariadb-operator-controller-manager-68956c85f5-mrnqc\" (UID: \"30bcffc2-0054-475e-af66-74b73ec95edb\") " pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.776992 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30bcffc2-0054-475e-af66-74b73ec95edb-apiservice-cert\") pod \"mariadb-operator-controller-manager-68956c85f5-mrnqc\" (UID: \"30bcffc2-0054-475e-af66-74b73ec95edb\") " pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.878346 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5kn2\" (UniqueName: \"kubernetes.io/projected/30bcffc2-0054-475e-af66-74b73ec95edb-kube-api-access-w5kn2\") pod \"mariadb-operator-controller-manager-68956c85f5-mrnqc\" (UID: \"30bcffc2-0054-475e-af66-74b73ec95edb\") " pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.878445 4763 
Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.878479 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30bcffc2-0054-475e-af66-74b73ec95edb-apiservice-cert\") pod \"mariadb-operator-controller-manager-68956c85f5-mrnqc\" (UID: \"30bcffc2-0054-475e-af66-74b73ec95edb\") " pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc"
Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.883631 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30bcffc2-0054-475e-af66-74b73ec95edb-webhook-cert\") pod \"mariadb-operator-controller-manager-68956c85f5-mrnqc\" (UID: \"30bcffc2-0054-475e-af66-74b73ec95edb\") " pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc"
Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.883745 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30bcffc2-0054-475e-af66-74b73ec95edb-apiservice-cert\") pod \"mariadb-operator-controller-manager-68956c85f5-mrnqc\" (UID: \"30bcffc2-0054-475e-af66-74b73ec95edb\") " pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc"
Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.898004 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5kn2\" (UniqueName: \"kubernetes.io/projected/30bcffc2-0054-475e-af66-74b73ec95edb-kube-api-access-w5kn2\") pod \"mariadb-operator-controller-manager-68956c85f5-mrnqc\" (UID: \"30bcffc2-0054-475e-af66-74b73ec95edb\") " pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc"
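Note: each volume above goes through the same three steps: VerifyControllerAttachedVolume (the desired-state check), MountVolume started (the operation being launched), and MountVolume.SetUp succeeded (the actual state catching up). The underlying pattern is a desired-state/actual-state reconcile loop, sketched here with plain maps rather than kubelet's volume manager types:

package main

import "fmt"

// reconcile mounts every desired volume that is not yet in the actual
// (mounted) state, echoing the reconciler_common/operation_generator lines.
func reconcile(desired, mounted map[string]bool) {
	for vol := range desired {
		if !mounted[vol] {
			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", vol)
			mounted[vol] = true // stands in for MountVolume.SetUp succeeding
			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", vol)
		}
	}
}

func main() {
	desired := map[string]bool{
		"kube-api-access-w5kn2": true,
		"webhook-cert":          true,
		"apiservice-cert":       true,
	}
	reconcile(desired, map[string]bool{}) // nothing mounted yet -> three mounts
}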
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:13 crc kubenswrapper[4763]: I0131 15:08:13.083611 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc"] Jan 31 15:08:13 crc kubenswrapper[4763]: I0131 15:08:13.195330 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" event={"ID":"30bcffc2-0054-475e-af66-74b73ec95edb","Type":"ContainerStarted","Data":"9f7efbda8e0a57f609653c2420d19b62db36d50793cbce07101d503a155356a3"} Jan 31 15:08:17 crc kubenswrapper[4763]: I0131 15:08:17.224379 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" event={"ID":"30bcffc2-0054-475e-af66-74b73ec95edb","Type":"ContainerStarted","Data":"f17088523d5fc7e22b3cc161ca398fe9a451ef81ebcb110275568dee2b3c5dbd"} Jan 31 15:08:17 crc kubenswrapper[4763]: I0131 15:08:17.224838 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:17 crc kubenswrapper[4763]: I0131 15:08:17.247727 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" podStartSLOduration=1.4888131279999999 podStartE2EDuration="5.247707347s" podCreationTimestamp="2026-01-31 15:08:12 +0000 UTC" firstStartedPulling="2026-01-31 15:08:13.089744769 +0000 UTC m=+812.844483062" lastFinishedPulling="2026-01-31 15:08:16.848638968 +0000 UTC m=+816.603377281" observedRunningTime="2026-01-31 15:08:17.243315691 +0000 UTC m=+816.998053984" watchObservedRunningTime="2026-01-31 15:08:17.247707347 +0000 UTC m=+817.002445650" Jan 31 15:08:22 crc kubenswrapper[4763]: I0131 15:08:22.909127 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:25 crc kubenswrapper[4763]: I0131 15:08:25.935512 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-9vcjd"] Jan 31 15:08:25 crc kubenswrapper[4763]: I0131 15:08:25.937730 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-9vcjd" Jan 31 15:08:25 crc kubenswrapper[4763]: I0131 15:08:25.942030 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-9vcjd"] Jan 31 15:08:25 crc kubenswrapper[4763]: I0131 15:08:25.942152 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-gb9xs" Jan 31 15:08:26 crc kubenswrapper[4763]: I0131 15:08:26.062785 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxsng\" (UniqueName: \"kubernetes.io/projected/df73235a-c7ce-449c-b163-341974166624-kube-api-access-mxsng\") pod \"infra-operator-index-9vcjd\" (UID: \"df73235a-c7ce-449c-b163-341974166624\") " pod="openstack-operators/infra-operator-index-9vcjd" Jan 31 15:08:26 crc kubenswrapper[4763]: I0131 15:08:26.164354 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxsng\" (UniqueName: \"kubernetes.io/projected/df73235a-c7ce-449c-b163-341974166624-kube-api-access-mxsng\") pod \"infra-operator-index-9vcjd\" (UID: \"df73235a-c7ce-449c-b163-341974166624\") " pod="openstack-operators/infra-operator-index-9vcjd" Jan 31 15:08:26 crc kubenswrapper[4763]: I0131 15:08:26.200265 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxsng\" (UniqueName: \"kubernetes.io/projected/df73235a-c7ce-449c-b163-341974166624-kube-api-access-mxsng\") pod \"infra-operator-index-9vcjd\" (UID: \"df73235a-c7ce-449c-b163-341974166624\") " pod="openstack-operators/infra-operator-index-9vcjd" Jan 31 15:08:26 crc kubenswrapper[4763]: I0131 15:08:26.254383 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-9vcjd" Jan 31 15:08:26 crc kubenswrapper[4763]: I0131 15:08:26.712753 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-9vcjd"] Jan 31 15:08:26 crc kubenswrapper[4763]: W0131 15:08:26.717301 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf73235a_c7ce_449c_b163_341974166624.slice/crio-2dd2e004f215046371ac43351b1bc973eb8b0334a1fb4966c1beea11626d7165 WatchSource:0}: Error finding container 2dd2e004f215046371ac43351b1bc973eb8b0334a1fb4966c1beea11626d7165: Status 404 returned error can't find the container with id 2dd2e004f215046371ac43351b1bc973eb8b0334a1fb4966c1beea11626d7165 Jan 31 15:08:27 crc kubenswrapper[4763]: I0131 15:08:27.289428 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-9vcjd" event={"ID":"df73235a-c7ce-449c-b163-341974166624","Type":"ContainerStarted","Data":"2dd2e004f215046371ac43351b1bc973eb8b0334a1fb4966c1beea11626d7165"} Jan 31 15:08:28 crc kubenswrapper[4763]: I0131 15:08:28.298366 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-9vcjd" event={"ID":"df73235a-c7ce-449c-b163-341974166624","Type":"ContainerStarted","Data":"ee320b2429f4044596e0f420ac4cfc8e847433b8e04df275225c7c5c65b706de"} Jan 31 15:08:28 crc kubenswrapper[4763]: I0131 15:08:28.328370 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-9vcjd" podStartSLOduration=2.470410388 podStartE2EDuration="3.328336055s" podCreationTimestamp="2026-01-31 15:08:25 +0000 UTC" firstStartedPulling="2026-01-31 15:08:26.721427142 +0000 UTC m=+826.476165465" lastFinishedPulling="2026-01-31 15:08:27.579352829 +0000 UTC m=+827.334091132" observedRunningTime="2026-01-31 15:08:28.320089929 +0000 UTC m=+828.074828262" watchObservedRunningTime="2026-01-31 15:08:28.328336055 +0000 UTC m=+828.083074388" Jan 31 15:08:36 crc kubenswrapper[4763]: I0131 15:08:36.255550 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-index-9vcjd" Jan 31 15:08:36 crc kubenswrapper[4763]: I0131 15:08:36.256413 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-9vcjd" Jan 31 15:08:36 crc kubenswrapper[4763]: I0131 15:08:36.301991 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-9vcjd" Jan 31 15:08:36 crc kubenswrapper[4763]: I0131 15:08:36.400410 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-9vcjd" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.195112 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb"] Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.197194 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.199936 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rrv7w" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.210176 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb"] Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.336127 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvxcd\" (UniqueName: \"kubernetes.io/projected/3636515d-8655-48d7-b0f6-54e4c6635f1c-kube-api-access-nvxcd\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.336414 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.336783 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.438068 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.438179 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvxcd\" (UniqueName: \"kubernetes.io/projected/3636515d-8655-48d7-b0f6-54e4c6635f1c-kube-api-access-nvxcd\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.438222 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.439145 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.439485 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.472800 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvxcd\" (UniqueName: \"kubernetes.io/projected/3636515d-8655-48d7-b0f6-54e4c6635f1c-kube-api-access-nvxcd\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.523689 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.784423 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb"] Jan 31 15:08:45 crc kubenswrapper[4763]: W0131 15:08:45.786281 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3636515d_8655_48d7_b0f6_54e4c6635f1c.slice/crio-e8dac8b58b5f7a60b8a575b9164ec12af0cabffd3ab35eb88d623b326f00a072 WatchSource:0}: Error finding container e8dac8b58b5f7a60b8a575b9164ec12af0cabffd3ab35eb88d623b326f00a072: Status 404 returned error can't find the container with id e8dac8b58b5f7a60b8a575b9164ec12af0cabffd3ab35eb88d623b326f00a072 Jan 31 15:08:46 crc kubenswrapper[4763]: I0131 15:08:46.437634 4763 generic.go:334] "Generic (PLEG): container finished" podID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerID="ed4e2126766da63c0e7e50df98b96eda794a2808f23cd3ea8177f76b4ede5721" exitCode=0 Jan 31 15:08:46 crc kubenswrapper[4763]: I0131 15:08:46.437677 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" event={"ID":"3636515d-8655-48d7-b0f6-54e4c6635f1c","Type":"ContainerDied","Data":"ed4e2126766da63c0e7e50df98b96eda794a2808f23cd3ea8177f76b4ede5721"} Jan 31 15:08:46 crc kubenswrapper[4763]: I0131 15:08:46.437757 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" event={"ID":"3636515d-8655-48d7-b0f6-54e4c6635f1c","Type":"ContainerStarted","Data":"e8dac8b58b5f7a60b8a575b9164ec12af0cabffd3ab35eb88d623b326f00a072"} Jan 31 15:08:48 crc kubenswrapper[4763]: I0131 15:08:48.486012 4763 generic.go:334] "Generic (PLEG): container finished" podID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerID="fef703f3e9560fc57c77a16edc2c33c259c56a68ca6dcbdf7cb14f969e6de2c7" exitCode=0 Jan 31 15:08:48 crc kubenswrapper[4763]: I0131 15:08:48.486384 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" event={"ID":"3636515d-8655-48d7-b0f6-54e4c6635f1c","Type":"ContainerDied","Data":"fef703f3e9560fc57c77a16edc2c33c259c56a68ca6dcbdf7cb14f969e6de2c7"} Jan 31 15:08:49 crc kubenswrapper[4763]: I0131 15:08:49.496768 4763 generic.go:334] "Generic (PLEG): container finished" podID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerID="4a7ea84e03b7c6a30104c397d809d487a129ecdccd4b68c9144a56a44b18655c" exitCode=0 Jan 31 15:08:49 crc kubenswrapper[4763]: I0131 15:08:49.496863 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" event={"ID":"3636515d-8655-48d7-b0f6-54e4c6635f1c","Type":"ContainerDied","Data":"4a7ea84e03b7c6a30104c397d809d487a129ecdccd4b68c9144a56a44b18655c"} Jan 31 15:08:50 crc kubenswrapper[4763]: I0131 15:08:50.858132 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.030964 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-bundle\") pod \"3636515d-8655-48d7-b0f6-54e4c6635f1c\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.031112 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-util\") pod \"3636515d-8655-48d7-b0f6-54e4c6635f1c\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.031152 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvxcd\" (UniqueName: \"kubernetes.io/projected/3636515d-8655-48d7-b0f6-54e4c6635f1c-kube-api-access-nvxcd\") pod \"3636515d-8655-48d7-b0f6-54e4c6635f1c\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.036145 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-bundle" (OuterVolumeSpecName: "bundle") pod "3636515d-8655-48d7-b0f6-54e4c6635f1c" (UID: "3636515d-8655-48d7-b0f6-54e4c6635f1c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.049802 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3636515d-8655-48d7-b0f6-54e4c6635f1c-kube-api-access-nvxcd" (OuterVolumeSpecName: "kube-api-access-nvxcd") pod "3636515d-8655-48d7-b0f6-54e4c6635f1c" (UID: "3636515d-8655-48d7-b0f6-54e4c6635f1c"). InnerVolumeSpecName "kube-api-access-nvxcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.059753 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-util" (OuterVolumeSpecName: "util") pod "3636515d-8655-48d7-b0f6-54e4c6635f1c" (UID: "3636515d-8655-48d7-b0f6-54e4c6635f1c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.132809 4763 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.132856 4763 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-util\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.132876 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvxcd\" (UniqueName: \"kubernetes.io/projected/3636515d-8655-48d7-b0f6-54e4c6635f1c-kube-api-access-nvxcd\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.521474 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" event={"ID":"3636515d-8655-48d7-b0f6-54e4c6635f1c","Type":"ContainerDied","Data":"e8dac8b58b5f7a60b8a575b9164ec12af0cabffd3ab35eb88d623b326f00a072"} Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.521532 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8dac8b58b5f7a60b8a575b9164ec12af0cabffd3ab35eb88d623b326f00a072" Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.521671 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.743971 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dl6p4"] Jan 31 15:08:54 crc kubenswrapper[4763]: E0131 15:08:54.744542 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerName="util" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.744552 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerName="util" Jan 31 15:08:54 crc kubenswrapper[4763]: E0131 15:08:54.744564 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerName="extract" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.744571 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerName="extract" Jan 31 15:08:54 crc kubenswrapper[4763]: E0131 15:08:54.744584 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerName="pull" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.744590 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerName="pull" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.744705 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerName="extract" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.745414 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.802247 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dl6p4"] Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.881255 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt7xf\" (UniqueName: \"kubernetes.io/projected/73b1db31-195c-41e8-9ab4-6e13e96600fa-kube-api-access-rt7xf\") pod \"certified-operators-dl6p4\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.881293 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-utilities\") pod \"certified-operators-dl6p4\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.881373 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-catalog-content\") pod \"certified-operators-dl6p4\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.983152 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt7xf\" (UniqueName: \"kubernetes.io/projected/73b1db31-195c-41e8-9ab4-6e13e96600fa-kube-api-access-rt7xf\") pod \"certified-operators-dl6p4\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.983222 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-utilities\") pod \"certified-operators-dl6p4\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.983329 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-catalog-content\") pod \"certified-operators-dl6p4\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.983750 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-utilities\") pod \"certified-operators-dl6p4\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.983825 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-catalog-content\") pod \"certified-operators-dl6p4\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.017203 4763 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rt7xf\" (UniqueName: \"kubernetes.io/projected/73b1db31-195c-41e8-9ab4-6e13e96600fa-kube-api-access-rt7xf\") pod \"certified-operators-dl6p4\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.098204 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.338512 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dl6p4"] Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.343487 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xx6l2"] Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.358811 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xx6l2"] Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.358941 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.489357 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-catalog-content\") pod \"community-operators-xx6l2\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.489453 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl6td\" (UniqueName: \"kubernetes.io/projected/1102b46b-1431-4abc-acf3-fc15238c9dec-kube-api-access-fl6td\") pod \"community-operators-xx6l2\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.489489 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-utilities\") pod \"community-operators-xx6l2\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.559303 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl6p4" event={"ID":"73b1db31-195c-41e8-9ab4-6e13e96600fa","Type":"ContainerStarted","Data":"4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce"} Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.559351 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl6p4" event={"ID":"73b1db31-195c-41e8-9ab4-6e13e96600fa","Type":"ContainerStarted","Data":"1ce5c6e1711277fcfbaf495466b9e5fe5c110ae0edd21c55bf7a8fa1d794558e"} Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.590623 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-catalog-content\") pod \"community-operators-xx6l2\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 
Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.591073 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-utilities\") pod \"community-operators-xx6l2\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " pod="openshift-marketplace/community-operators-xx6l2"
Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.591179 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-catalog-content\") pod \"community-operators-xx6l2\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " pod="openshift-marketplace/community-operators-xx6l2"
Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.591459 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-utilities\") pod \"community-operators-xx6l2\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " pod="openshift-marketplace/community-operators-xx6l2"
Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.610491 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl6td\" (UniqueName: \"kubernetes.io/projected/1102b46b-1431-4abc-acf3-fc15238c9dec-kube-api-access-fl6td\") pod \"community-operators-xx6l2\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " pod="openshift-marketplace/community-operators-xx6l2"
Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.697792 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xx6l2"
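Note: the two catalog pods' lifecycles interleave here: "SyncLoop ADD/UPDATE" lines are pod configuration arriving from the API server, while "SyncLoop (PLEG)" lines are runtime events, and a single loop consumes both. A toy version of that select-driven loop, not kubelet's syncLoopIteration:

package main

import (
	"fmt"
	"time"
)

type podUpdate struct{ op, pod string }
type plegEvent struct{ pod, data string }

// syncLoop handles whichever source has something ready, mirroring how API
// updates and PLEG events appear interleaved in the log above.
func syncLoop(updates <-chan podUpdate, pleg <-chan plegEvent, done <-chan struct{}) {
	for {
		select {
		case u := <-updates:
			fmt.Printf("SyncLoop %s pod=%q\n", u.op, u.pod)
		case e := <-pleg:
			fmt.Printf("SyncLoop (PLEG): event for pod %q data=%q\n", e.pod, e.data)
		case <-done:
			return
		}
	}
}

func main() {
	updates := make(chan podUpdate, 1)
	pleg := make(chan plegEvent, 1)
	done := make(chan struct{})
	updates <- podUpdate{"ADD", "openshift-marketplace/community-operators-xx6l2"}
	pleg <- plegEvent{"openshift-marketplace/certified-operators-dl6p4", "ContainerStarted"}
	go func() {
		time.Sleep(50 * time.Millisecond) // let the buffered events drain first
		close(done)
	}()
	syncLoop(updates, pleg, done)
}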
Need to start a new one" pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:08:56 crc kubenswrapper[4763]: I0131 15:08:56.149598 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xx6l2"] Jan 31 15:08:56 crc kubenswrapper[4763]: W0131 15:08:56.158218 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1102b46b_1431_4abc_acf3_fc15238c9dec.slice/crio-df2ae68589cdf5f2daa2674b8c6d3f469b0de474439fd474c5f8daf2c17ae296 WatchSource:0}: Error finding container df2ae68589cdf5f2daa2674b8c6d3f469b0de474439fd474c5f8daf2c17ae296: Status 404 returned error can't find the container with id df2ae68589cdf5f2daa2674b8c6d3f469b0de474439fd474c5f8daf2c17ae296 Jan 31 15:08:56 crc kubenswrapper[4763]: I0131 15:08:56.566190 4763 generic.go:334] "Generic (PLEG): container finished" podID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerID="58e640168ef1b75e853e394649bc966d1036d4bb11ab8918c809f9ee7dee4196" exitCode=0 Jan 31 15:08:56 crc kubenswrapper[4763]: I0131 15:08:56.566264 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xx6l2" event={"ID":"1102b46b-1431-4abc-acf3-fc15238c9dec","Type":"ContainerDied","Data":"58e640168ef1b75e853e394649bc966d1036d4bb11ab8918c809f9ee7dee4196"} Jan 31 15:08:56 crc kubenswrapper[4763]: I0131 15:08:56.566580 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xx6l2" event={"ID":"1102b46b-1431-4abc-acf3-fc15238c9dec","Type":"ContainerStarted","Data":"df2ae68589cdf5f2daa2674b8c6d3f469b0de474439fd474c5f8daf2c17ae296"} Jan 31 15:08:56 crc kubenswrapper[4763]: I0131 15:08:56.570022 4763 generic.go:334] "Generic (PLEG): container finished" podID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerID="4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce" exitCode=0 Jan 31 15:08:56 crc kubenswrapper[4763]: I0131 15:08:56.570071 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl6p4" event={"ID":"73b1db31-195c-41e8-9ab4-6e13e96600fa","Type":"ContainerDied","Data":"4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce"} Jan 31 15:08:57 crc kubenswrapper[4763]: I0131 15:08:57.576783 4763 generic.go:334] "Generic (PLEG): container finished" podID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerID="34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26" exitCode=0 Jan 31 15:08:57 crc kubenswrapper[4763]: I0131 15:08:57.576935 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl6p4" event={"ID":"73b1db31-195c-41e8-9ab4-6e13e96600fa","Type":"ContainerDied","Data":"34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26"} Jan 31 15:08:57 crc kubenswrapper[4763]: I0131 15:08:57.579774 4763 generic.go:334] "Generic (PLEG): container finished" podID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerID="0c5a179d917112c47df3d672325ac30e6e4efd61885f9377b2ea3e10d6c629b4" exitCode=0 Jan 31 15:08:57 crc kubenswrapper[4763]: I0131 15:08:57.579816 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xx6l2" event={"ID":"1102b46b-1431-4abc-acf3-fc15238c9dec","Type":"ContainerDied","Data":"0c5a179d917112c47df3d672325ac30e6e4efd61885f9377b2ea3e10d6c629b4"} Jan 31 15:08:58 crc kubenswrapper[4763]: I0131 15:08:58.587014 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-xx6l2" event={"ID":"1102b46b-1431-4abc-acf3-fc15238c9dec","Type":"ContainerStarted","Data":"23e3fe9f7b231da52853b276574f18bf79feb38b42567138be71d5b85f526157"} Jan 31 15:08:58 crc kubenswrapper[4763]: I0131 15:08:58.589399 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl6p4" event={"ID":"73b1db31-195c-41e8-9ab4-6e13e96600fa","Type":"ContainerStarted","Data":"3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b"} Jan 31 15:08:58 crc kubenswrapper[4763]: I0131 15:08:58.602999 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xx6l2" podStartSLOduration=2.224622835 podStartE2EDuration="3.602984067s" podCreationTimestamp="2026-01-31 15:08:55 +0000 UTC" firstStartedPulling="2026-01-31 15:08:56.56786793 +0000 UTC m=+856.322606233" lastFinishedPulling="2026-01-31 15:08:57.946229172 +0000 UTC m=+857.700967465" observedRunningTime="2026-01-31 15:08:58.602152925 +0000 UTC m=+858.356891218" watchObservedRunningTime="2026-01-31 15:08:58.602984067 +0000 UTC m=+858.357722360" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.233892 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dl6p4" podStartSLOduration=6.80510965 podStartE2EDuration="9.233877542s" podCreationTimestamp="2026-01-31 15:08:54 +0000 UTC" firstStartedPulling="2026-01-31 15:08:55.560674454 +0000 UTC m=+855.315412747" lastFinishedPulling="2026-01-31 15:08:57.989442346 +0000 UTC m=+857.744180639" observedRunningTime="2026-01-31 15:08:58.623519606 +0000 UTC m=+858.378257899" watchObservedRunningTime="2026-01-31 15:09:03.233877542 +0000 UTC m=+862.988615835" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.234736 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5"] Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.235392 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.237273 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.237438 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-pk6lc" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.258867 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5"] Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.419268 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmdvh\" (UniqueName: \"kubernetes.io/projected/ff757490-bd0f-4140-9f70-e5ec9d26353f-kube-api-access-pmdvh\") pod \"infra-operator-controller-manager-5b76796566-wfzb5\" (UID: \"ff757490-bd0f-4140-9f70-e5ec9d26353f\") " pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.419349 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff757490-bd0f-4140-9f70-e5ec9d26353f-webhook-cert\") pod \"infra-operator-controller-manager-5b76796566-wfzb5\" (UID: \"ff757490-bd0f-4140-9f70-e5ec9d26353f\") " pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.419499 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ff757490-bd0f-4140-9f70-e5ec9d26353f-apiservice-cert\") pod \"infra-operator-controller-manager-5b76796566-wfzb5\" (UID: \"ff757490-bd0f-4140-9f70-e5ec9d26353f\") " pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.520673 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ff757490-bd0f-4140-9f70-e5ec9d26353f-apiservice-cert\") pod \"infra-operator-controller-manager-5b76796566-wfzb5\" (UID: \"ff757490-bd0f-4140-9f70-e5ec9d26353f\") " pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.520758 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmdvh\" (UniqueName: \"kubernetes.io/projected/ff757490-bd0f-4140-9f70-e5ec9d26353f-kube-api-access-pmdvh\") pod \"infra-operator-controller-manager-5b76796566-wfzb5\" (UID: \"ff757490-bd0f-4140-9f70-e5ec9d26353f\") " pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.520792 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff757490-bd0f-4140-9f70-e5ec9d26353f-webhook-cert\") pod \"infra-operator-controller-manager-5b76796566-wfzb5\" (UID: \"ff757490-bd0f-4140-9f70-e5ec9d26353f\") " pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.526256 4763 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff757490-bd0f-4140-9f70-e5ec9d26353f-webhook-cert\") pod \"infra-operator-controller-manager-5b76796566-wfzb5\" (UID: \"ff757490-bd0f-4140-9f70-e5ec9d26353f\") " pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.529050 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ff757490-bd0f-4140-9f70-e5ec9d26353f-apiservice-cert\") pod \"infra-operator-controller-manager-5b76796566-wfzb5\" (UID: \"ff757490-bd0f-4140-9f70-e5ec9d26353f\") " pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.540611 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmdvh\" (UniqueName: \"kubernetes.io/projected/ff757490-bd0f-4140-9f70-e5ec9d26353f-kube-api-access-pmdvh\") pod \"infra-operator-controller-manager-5b76796566-wfzb5\" (UID: \"ff757490-bd0f-4140-9f70-e5ec9d26353f\") " pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.562378 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.993208 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5"] Jan 31 15:09:04 crc kubenswrapper[4763]: I0131 15:09:04.639732 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" event={"ID":"ff757490-bd0f-4140-9f70-e5ec9d26353f","Type":"ContainerStarted","Data":"ccb30435700f857e3fcda34fc789beb38c3fe560d9f9d1985ed5f669ef555514"} Jan 31 15:09:05 crc kubenswrapper[4763]: I0131 15:09:05.099126 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:09:05 crc kubenswrapper[4763]: I0131 15:09:05.100504 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:09:05 crc kubenswrapper[4763]: I0131 15:09:05.148181 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:09:05 crc kubenswrapper[4763]: I0131 15:09:05.698138 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:09:05 crc kubenswrapper[4763]: I0131 15:09:05.698197 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:09:05 crc kubenswrapper[4763]: I0131 15:09:05.706850 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:09:05 crc kubenswrapper[4763]: I0131 15:09:05.749747 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:09:06 crc kubenswrapper[4763]: I0131 15:09:06.656275 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" 
event={"ID":"ff757490-bd0f-4140-9f70-e5ec9d26353f","Type":"ContainerStarted","Data":"59374f00e9c06934e324014a2f2bd5c0e56a99a03ab8b33a467bbd32270380ad"} Jan 31 15:09:06 crc kubenswrapper[4763]: I0131 15:09:06.682066 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" podStartSLOduration=1.588242655 podStartE2EDuration="3.682048383s" podCreationTimestamp="2026-01-31 15:09:03 +0000 UTC" firstStartedPulling="2026-01-31 15:09:04.023012603 +0000 UTC m=+863.777750936" lastFinishedPulling="2026-01-31 15:09:06.116818381 +0000 UTC m=+865.871556664" observedRunningTime="2026-01-31 15:09:06.677991666 +0000 UTC m=+866.432729999" watchObservedRunningTime="2026-01-31 15:09:06.682048383 +0000 UTC m=+866.436786676" Jan 31 15:09:06 crc kubenswrapper[4763]: I0131 15:09:06.712937 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:09:07 crc kubenswrapper[4763]: I0131 15:09:07.663918 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.821000 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.822565 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.826047 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openshift-service-ca.crt" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.826672 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"kube-root-ca.crt" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.827900 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"galera-openstack-dockercfg-2b4kn" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.828102 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openstack-config-data" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.830878 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openstack-scripts" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.842395 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.843590 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.850120 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.852642 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.860296 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.863202 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.869750 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999334 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e5a89037-391b-4806-8f01-09ddd6a4d13e-kolla-config\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999394 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5a89037-391b-4806-8f01-09ddd6a4d13e-operator-scripts\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999432 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dc474c59-7d29-4ce0-86c8-07d96c462b4e-kolla-config\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999473 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-kolla-config\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999509 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999569 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e5a89037-391b-4806-8f01-09ddd6a4d13e-config-data-generated\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999603 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8mfz\" (UniqueName: \"kubernetes.io/projected/e5a89037-391b-4806-8f01-09ddd6a4d13e-kube-api-access-w8mfz\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999635 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/dc474c59-7d29-4ce0-86c8-07d96c462b4e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999678 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxdxd\" (UniqueName: \"kubernetes.io/projected/dc474c59-7d29-4ce0-86c8-07d96c462b4e-kube-api-access-zxdxd\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999741 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dc474c59-7d29-4ce0-86c8-07d96c462b4e-config-data-default\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999771 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:08.999806 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-config-data-generated\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:08.999879 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-operator-scripts\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.000024 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpwmv\" (UniqueName: \"kubernetes.io/projected/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-kube-api-access-cpwmv\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.000113 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-config-data-default\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.000207 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.000246 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e5a89037-391b-4806-8f01-09ddd6a4d13e-config-data-default\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.000298 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc474c59-7d29-4ce0-86c8-07d96c462b4e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101337 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxdxd\" (UniqueName: \"kubernetes.io/projected/dc474c59-7d29-4ce0-86c8-07d96c462b4e-kube-api-access-zxdxd\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101379 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dc474c59-7d29-4ce0-86c8-07d96c462b4e-config-data-default\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101396 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101412 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-config-data-generated\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101433 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-operator-scripts\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101463 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpwmv\" (UniqueName: \"kubernetes.io/projected/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-kube-api-access-cpwmv\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101482 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-config-data-default\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101515 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101537 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e5a89037-391b-4806-8f01-09ddd6a4d13e-config-data-default\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101560 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc474c59-7d29-4ce0-86c8-07d96c462b4e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101584 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e5a89037-391b-4806-8f01-09ddd6a4d13e-kolla-config\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101601 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5a89037-391b-4806-8f01-09ddd6a4d13e-operator-scripts\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101622 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dc474c59-7d29-4ce0-86c8-07d96c462b4e-kolla-config\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101645 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-kolla-config\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101662 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101679 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e5a89037-391b-4806-8f01-09ddd6a4d13e-config-data-generated\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101713 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8mfz\" (UniqueName: \"kubernetes.io/projected/e5a89037-391b-4806-8f01-09ddd6a4d13e-kube-api-access-w8mfz\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " 
pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101764 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dc474c59-7d29-4ce0-86c8-07d96c462b4e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.102248 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dc474c59-7d29-4ce0-86c8-07d96c462b4e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.102550 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") device mount path \"/mnt/openstack/pv10\"" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.103060 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e5a89037-391b-4806-8f01-09ddd6a4d13e-config-data-generated\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.103257 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") device mount path \"/mnt/openstack/pv07\"" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.103272 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") device mount path \"/mnt/openstack/pv12\"" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.103374 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-kolla-config\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.103494 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc474c59-7d29-4ce0-86c8-07d96c462b4e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.103650 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dc474c59-7d29-4ce0-86c8-07d96c462b4e-kolla-config\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.103771 4763 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e5a89037-391b-4806-8f01-09ddd6a4d13e-kolla-config\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.103988 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-config-data-generated\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.104650 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-config-data-default\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.104715 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e5a89037-391b-4806-8f01-09ddd6a4d13e-config-data-default\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.104827 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dc474c59-7d29-4ce0-86c8-07d96c462b4e-config-data-default\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.104983 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-operator-scripts\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.105601 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5a89037-391b-4806-8f01-09ddd6a4d13e-operator-scripts\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.124558 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.126790 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.128876 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " 
pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.129684 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxdxd\" (UniqueName: \"kubernetes.io/projected/dc474c59-7d29-4ce0-86c8-07d96c462b4e-kube-api-access-zxdxd\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.134584 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpwmv\" (UniqueName: \"kubernetes.io/projected/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-kube-api-access-cpwmv\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.141844 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8mfz\" (UniqueName: \"kubernetes.io/projected/e5a89037-391b-4806-8f01-09ddd6a4d13e-kube-api-access-w8mfz\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.160182 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.177975 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.186451 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.471605 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.674923 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc","Type":"ContainerStarted","Data":"e7a62c9e853bb819796306cfa81c4490baddece71bc86737097c21ff5c15cb05"} Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.739267 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.758116 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Jan 31 15:09:09 crc kubenswrapper[4763]: W0131 15:09:09.764982 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc474c59_7d29_4ce0_86c8_07d96c462b4e.slice/crio-e7d2c892bd28ee789e5a46a9e46a211734b0849f3ef86c66c65eca95f7d8991e WatchSource:0}: Error finding container e7d2c892bd28ee789e5a46a9e46a211734b0849f3ef86c66c65eca95f7d8991e: Status 404 returned error can't find the container with id e7d2c892bd28ee789e5a46a9e46a211734b0849f3ef86c66c65eca95f7d8991e Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.927979 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dl6p4"] Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.928350 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dl6p4" podUID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerName="registry-server" 
containerID="cri-o://3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b" gracePeriod=2 Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.346495 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.420199 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-utilities\") pod \"73b1db31-195c-41e8-9ab4-6e13e96600fa\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.420244 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt7xf\" (UniqueName: \"kubernetes.io/projected/73b1db31-195c-41e8-9ab4-6e13e96600fa-kube-api-access-rt7xf\") pod \"73b1db31-195c-41e8-9ab4-6e13e96600fa\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.420302 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-catalog-content\") pod \"73b1db31-195c-41e8-9ab4-6e13e96600fa\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.421586 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-utilities" (OuterVolumeSpecName: "utilities") pod "73b1db31-195c-41e8-9ab4-6e13e96600fa" (UID: "73b1db31-195c-41e8-9ab4-6e13e96600fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.427182 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73b1db31-195c-41e8-9ab4-6e13e96600fa-kube-api-access-rt7xf" (OuterVolumeSpecName: "kube-api-access-rt7xf") pod "73b1db31-195c-41e8-9ab4-6e13e96600fa" (UID: "73b1db31-195c-41e8-9ab4-6e13e96600fa"). InnerVolumeSpecName "kube-api-access-rt7xf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.482352 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73b1db31-195c-41e8-9ab4-6e13e96600fa" (UID: "73b1db31-195c-41e8-9ab4-6e13e96600fa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.522021 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.522060 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt7xf\" (UniqueName: \"kubernetes.io/projected/73b1db31-195c-41e8-9ab4-6e13e96600fa-kube-api-access-rt7xf\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.522072 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.692393 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"e5a89037-391b-4806-8f01-09ddd6a4d13e","Type":"ContainerStarted","Data":"0283241ede766938c9f3cc5927a25fdc80110beb7aff3558b75f482647214a8f"} Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.696282 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"dc474c59-7d29-4ce0-86c8-07d96c462b4e","Type":"ContainerStarted","Data":"e7d2c892bd28ee789e5a46a9e46a211734b0849f3ef86c66c65eca95f7d8991e"} Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.698497 4763 generic.go:334] "Generic (PLEG): container finished" podID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerID="3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b" exitCode=0 Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.698532 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl6p4" event={"ID":"73b1db31-195c-41e8-9ab4-6e13e96600fa","Type":"ContainerDied","Data":"3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b"} Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.698555 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl6p4" event={"ID":"73b1db31-195c-41e8-9ab4-6e13e96600fa","Type":"ContainerDied","Data":"1ce5c6e1711277fcfbaf495466b9e5fe5c110ae0edd21c55bf7a8fa1d794558e"} Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.698574 4763 scope.go:117] "RemoveContainer" containerID="3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.698695 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.728439 4763 scope.go:117] "RemoveContainer" containerID="34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.745547 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dl6p4"] Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.753606 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dl6p4"] Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.761202 4763 scope.go:117] "RemoveContainer" containerID="4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.777915 4763 scope.go:117] "RemoveContainer" containerID="3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b" Jan 31 15:09:10 crc kubenswrapper[4763]: E0131 15:09:10.778428 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b\": container with ID starting with 3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b not found: ID does not exist" containerID="3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.778463 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b"} err="failed to get container status \"3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b\": rpc error: code = NotFound desc = could not find container \"3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b\": container with ID starting with 3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b not found: ID does not exist" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.778492 4763 scope.go:117] "RemoveContainer" containerID="34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26" Jan 31 15:09:10 crc kubenswrapper[4763]: E0131 15:09:10.778836 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26\": container with ID starting with 34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26 not found: ID does not exist" containerID="34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.778878 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26"} err="failed to get container status \"34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26\": rpc error: code = NotFound desc = could not find container \"34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26\": container with ID starting with 34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26 not found: ID does not exist" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.778903 4763 scope.go:117] "RemoveContainer" containerID="4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce" Jan 31 15:09:10 crc kubenswrapper[4763]: E0131 15:09:10.779385 4763 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce\": container with ID starting with 4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce not found: ID does not exist" containerID="4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.779421 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce"} err="failed to get container status \"4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce\": rpc error: code = NotFound desc = could not find container \"4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce\": container with ID starting with 4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce not found: ID does not exist" Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.064399 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73b1db31-195c-41e8-9ab4-6e13e96600fa" path="/var/lib/kubelet/pods/73b1db31-195c-41e8-9ab4-6e13e96600fa/volumes" Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.325736 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xx6l2"] Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.325994 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xx6l2" podUID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerName="registry-server" containerID="cri-o://23e3fe9f7b231da52853b276574f18bf79feb38b42567138be71d5b85f526157" gracePeriod=2 Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.710541 4763 generic.go:334] "Generic (PLEG): container finished" podID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerID="23e3fe9f7b231da52853b276574f18bf79feb38b42567138be71d5b85f526157" exitCode=0 Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.710625 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xx6l2" event={"ID":"1102b46b-1431-4abc-acf3-fc15238c9dec","Type":"ContainerDied","Data":"23e3fe9f7b231da52853b276574f18bf79feb38b42567138be71d5b85f526157"} Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.710871 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xx6l2" event={"ID":"1102b46b-1431-4abc-acf3-fc15238c9dec","Type":"ContainerDied","Data":"df2ae68589cdf5f2daa2674b8c6d3f469b0de474439fd474c5f8daf2c17ae296"} Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.710892 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df2ae68589cdf5f2daa2674b8c6d3f469b0de474439fd474c5f8daf2c17ae296" Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.722153 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.849513 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-utilities\") pod \"1102b46b-1431-4abc-acf3-fc15238c9dec\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.849622 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl6td\" (UniqueName: \"kubernetes.io/projected/1102b46b-1431-4abc-acf3-fc15238c9dec-kube-api-access-fl6td\") pod \"1102b46b-1431-4abc-acf3-fc15238c9dec\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.849664 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-catalog-content\") pod \"1102b46b-1431-4abc-acf3-fc15238c9dec\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.851590 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-utilities" (OuterVolumeSpecName: "utilities") pod "1102b46b-1431-4abc-acf3-fc15238c9dec" (UID: "1102b46b-1431-4abc-acf3-fc15238c9dec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.861656 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1102b46b-1431-4abc-acf3-fc15238c9dec-kube-api-access-fl6td" (OuterVolumeSpecName: "kube-api-access-fl6td") pod "1102b46b-1431-4abc-acf3-fc15238c9dec" (UID: "1102b46b-1431-4abc-acf3-fc15238c9dec"). InnerVolumeSpecName "kube-api-access-fl6td". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.895818 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1102b46b-1431-4abc-acf3-fc15238c9dec" (UID: "1102b46b-1431-4abc-acf3-fc15238c9dec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.950750 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl6td\" (UniqueName: \"kubernetes.io/projected/1102b46b-1431-4abc-acf3-fc15238c9dec-kube-api-access-fl6td\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.950790 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.950804 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:12 crc kubenswrapper[4763]: I0131 15:09:12.717165 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:09:12 crc kubenswrapper[4763]: I0131 15:09:12.749231 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xx6l2"] Jan 31 15:09:12 crc kubenswrapper[4763]: I0131 15:09:12.752873 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xx6l2"] Jan 31 15:09:13 crc kubenswrapper[4763]: I0131 15:09:13.047891 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1102b46b-1431-4abc-acf3-fc15238c9dec" path="/var/lib/kubelet/pods/1102b46b-1431-4abc-acf3-fc15238c9dec/volumes" Jan 31 15:09:13 crc kubenswrapper[4763]: I0131 15:09:13.568441 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.338581 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-l9x4g"] Jan 31 15:09:19 crc kubenswrapper[4763]: E0131 15:09:19.339249 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerName="extract-content" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.339355 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerName="extract-content" Jan 31 15:09:19 crc kubenswrapper[4763]: E0131 15:09:19.339379 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerName="extract-utilities" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.339388 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerName="extract-utilities" Jan 31 15:09:19 crc kubenswrapper[4763]: E0131 15:09:19.339398 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerName="registry-server" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.339406 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerName="registry-server" Jan 31 15:09:19 crc kubenswrapper[4763]: E0131 15:09:19.339417 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerName="registry-server" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.339425 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerName="registry-server" Jan 31 15:09:19 crc kubenswrapper[4763]: E0131 15:09:19.339439 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerName="extract-content" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.339447 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerName="extract-content" Jan 31 15:09:19 crc kubenswrapper[4763]: E0131 15:09:19.339458 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerName="extract-utilities" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.339466 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerName="extract-utilities" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.339570 4763 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerName="registry-server" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.339580 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerName="registry-server" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.339991 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.342268 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-l7lb8" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.350667 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-l9x4g"] Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.455610 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvww4\" (UniqueName: \"kubernetes.io/projected/6fa47f40-fce4-4e57-aebb-3313c4c996dd-kube-api-access-vvww4\") pod \"rabbitmq-cluster-operator-index-l9x4g\" (UID: \"6fa47f40-fce4-4e57-aebb-3313c4c996dd\") " pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.557390 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvww4\" (UniqueName: \"kubernetes.io/projected/6fa47f40-fce4-4e57-aebb-3313c4c996dd-kube-api-access-vvww4\") pod \"rabbitmq-cluster-operator-index-l9x4g\" (UID: \"6fa47f40-fce4-4e57-aebb-3313c4c996dd\") " pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.577576 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvww4\" (UniqueName: \"kubernetes.io/projected/6fa47f40-fce4-4e57-aebb-3313c4c996dd-kube-api-access-vvww4\") pod \"rabbitmq-cluster-operator-index-l9x4g\" (UID: \"6fa47f40-fce4-4e57-aebb-3313c4c996dd\") " pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.708355 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" Jan 31 15:09:21 crc kubenswrapper[4763]: I0131 15:09:21.756592 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-l9x4g"] Jan 31 15:09:21 crc kubenswrapper[4763]: W0131 15:09:21.764414 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fa47f40_fce4_4e57_aebb_3313c4c996dd.slice/crio-b5acb9dd6631da5850f2b0a900cab783ee79ccaaad65b3866952f7c3e37d3636 WatchSource:0}: Error finding container b5acb9dd6631da5850f2b0a900cab783ee79ccaaad65b3866952f7c3e37d3636: Status 404 returned error can't find the container with id b5acb9dd6631da5850f2b0a900cab783ee79ccaaad65b3866952f7c3e37d3636 Jan 31 15:09:21 crc kubenswrapper[4763]: I0131 15:09:21.783653 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" event={"ID":"6fa47f40-fce4-4e57-aebb-3313c4c996dd","Type":"ContainerStarted","Data":"b5acb9dd6631da5850f2b0a900cab783ee79ccaaad65b3866952f7c3e37d3636"} Jan 31 15:09:22 crc kubenswrapper[4763]: I0131 15:09:22.789892 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"e5a89037-391b-4806-8f01-09ddd6a4d13e","Type":"ContainerStarted","Data":"b85c3b69f3888f724d07fc1d22586bd6d61c9457a894e1c50be4e3612cb4f38b"} Jan 31 15:09:22 crc kubenswrapper[4763]: I0131 15:09:22.792485 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"dc474c59-7d29-4ce0-86c8-07d96c462b4e","Type":"ContainerStarted","Data":"ce02934a39037592e12671864428578327605869fa422a0b14f2952f37b4fe7f"} Jan 31 15:09:22 crc kubenswrapper[4763]: I0131 15:09:22.794134 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc","Type":"ContainerStarted","Data":"3b7aa4bbd87ecb3b6f56a81f7b6a6eb39d741c6c8714443fd00fef83557cbf4e"} Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.750947 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/memcached-0"] Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.751982 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.754019 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"memcached-memcached-dockercfg-ltl5x" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.755065 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"memcached-config-data" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.774751 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/memcached-0"] Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.836309 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m47f5\" (UniqueName: \"kubernetes.io/projected/ecb69fa0-2df1-477e-a257-05e0f1dd1c76-kube-api-access-m47f5\") pod \"memcached-0\" (UID: \"ecb69fa0-2df1-477e-a257-05e0f1dd1c76\") " pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.836367 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecb69fa0-2df1-477e-a257-05e0f1dd1c76-config-data\") pod \"memcached-0\" (UID: \"ecb69fa0-2df1-477e-a257-05e0f1dd1c76\") " pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.836390 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ecb69fa0-2df1-477e-a257-05e0f1dd1c76-kolla-config\") pod \"memcached-0\" (UID: \"ecb69fa0-2df1-477e-a257-05e0f1dd1c76\") " pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.938013 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m47f5\" (UniqueName: \"kubernetes.io/projected/ecb69fa0-2df1-477e-a257-05e0f1dd1c76-kube-api-access-m47f5\") pod \"memcached-0\" (UID: \"ecb69fa0-2df1-477e-a257-05e0f1dd1c76\") " pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.938059 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecb69fa0-2df1-477e-a257-05e0f1dd1c76-config-data\") pod \"memcached-0\" (UID: \"ecb69fa0-2df1-477e-a257-05e0f1dd1c76\") " pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.938078 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ecb69fa0-2df1-477e-a257-05e0f1dd1c76-kolla-config\") pod \"memcached-0\" (UID: \"ecb69fa0-2df1-477e-a257-05e0f1dd1c76\") " pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.938847 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ecb69fa0-2df1-477e-a257-05e0f1dd1c76-kolla-config\") pod \"memcached-0\" (UID: \"ecb69fa0-2df1-477e-a257-05e0f1dd1c76\") " pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.938918 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecb69fa0-2df1-477e-a257-05e0f1dd1c76-config-data\") pod \"memcached-0\" (UID: \"ecb69fa0-2df1-477e-a257-05e0f1dd1c76\") " pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:24 crc 
kubenswrapper[4763]: I0131 15:09:24.958557 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m47f5\" (UniqueName: \"kubernetes.io/projected/ecb69fa0-2df1-477e-a257-05e0f1dd1c76-kube-api-access-m47f5\") pod \"memcached-0\" (UID: \"ecb69fa0-2df1-477e-a257-05e0f1dd1c76\") " pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:25 crc kubenswrapper[4763]: I0131 15:09:25.111860 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:25 crc kubenswrapper[4763]: I0131 15:09:25.817152 4763 generic.go:334] "Generic (PLEG): container finished" podID="dc474c59-7d29-4ce0-86c8-07d96c462b4e" containerID="ce02934a39037592e12671864428578327605869fa422a0b14f2952f37b4fe7f" exitCode=0 Jan 31 15:09:25 crc kubenswrapper[4763]: I0131 15:09:25.817324 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"dc474c59-7d29-4ce0-86c8-07d96c462b4e","Type":"ContainerDied","Data":"ce02934a39037592e12671864428578327605869fa422a0b14f2952f37b4fe7f"} Jan 31 15:09:25 crc kubenswrapper[4763]: I0131 15:09:25.820321 4763 generic.go:334] "Generic (PLEG): container finished" podID="cd0d5ccb-1d59-428e-9a53-17427cd0e5dc" containerID="3b7aa4bbd87ecb3b6f56a81f7b6a6eb39d741c6c8714443fd00fef83557cbf4e" exitCode=0 Jan 31 15:09:25 crc kubenswrapper[4763]: I0131 15:09:25.820384 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc","Type":"ContainerDied","Data":"3b7aa4bbd87ecb3b6f56a81f7b6a6eb39d741c6c8714443fd00fef83557cbf4e"} Jan 31 15:09:25 crc kubenswrapper[4763]: I0131 15:09:25.828708 4763 generic.go:334] "Generic (PLEG): container finished" podID="e5a89037-391b-4806-8f01-09ddd6a4d13e" containerID="b85c3b69f3888f724d07fc1d22586bd6d61c9457a894e1c50be4e3612cb4f38b" exitCode=0 Jan 31 15:09:25 crc kubenswrapper[4763]: I0131 15:09:25.828764 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"e5a89037-391b-4806-8f01-09ddd6a4d13e","Type":"ContainerDied","Data":"b85c3b69f3888f724d07fc1d22586bd6d61c9457a894e1c50be4e3612cb4f38b"} Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.248506 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/memcached-0"] Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.840822 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" event={"ID":"6fa47f40-fce4-4e57-aebb-3313c4c996dd","Type":"ContainerStarted","Data":"5a7d5b09f32d7460f5ea0260edf43a51f47ca11e4d1a7f943b18f279f6393781"} Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.842841 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" event={"ID":"ecb69fa0-2df1-477e-a257-05e0f1dd1c76","Type":"ContainerStarted","Data":"a23813ee62437cd5e084ebd847143fc9fbe2b5e309aa2320771ea58c5eb7ba7c"} Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.844409 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"e5a89037-391b-4806-8f01-09ddd6a4d13e","Type":"ContainerStarted","Data":"e1c33ef385ad44dbf4cd4621deeafe477af76df1161ec9efe735c666b6660d89"} Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.846663 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" 
event={"ID":"dc474c59-7d29-4ce0-86c8-07d96c462b4e","Type":"ContainerStarted","Data":"e1e4b38c6e517d684a2ae323e5d2ad612b2b3909928f443ab242ac2744b4be7d"} Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.848922 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc","Type":"ContainerStarted","Data":"16c7b86549cc8bf6c8b6913433ebe7c9e1038b19015e1aad91f8610a8ce92baf"} Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.858824 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" podStartSLOduration=3.8290526849999997 podStartE2EDuration="7.858804486s" podCreationTimestamp="2026-01-31 15:09:19 +0000 UTC" firstStartedPulling="2026-01-31 15:09:21.774907265 +0000 UTC m=+881.529645558" lastFinishedPulling="2026-01-31 15:09:25.804659066 +0000 UTC m=+885.559397359" observedRunningTime="2026-01-31 15:09:26.854649967 +0000 UTC m=+886.609403210" watchObservedRunningTime="2026-01-31 15:09:26.858804486 +0000 UTC m=+886.613542779" Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.873674 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-2" podStartSLOduration=7.666174376 podStartE2EDuration="19.873657696s" podCreationTimestamp="2026-01-31 15:09:07 +0000 UTC" firstStartedPulling="2026-01-31 15:09:09.481947571 +0000 UTC m=+869.236685874" lastFinishedPulling="2026-01-31 15:09:21.689430911 +0000 UTC m=+881.444169194" observedRunningTime="2026-01-31 15:09:26.869523118 +0000 UTC m=+886.624261411" watchObservedRunningTime="2026-01-31 15:09:26.873657696 +0000 UTC m=+886.628395989" Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.891820 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-0" podStartSLOduration=7.913623523 podStartE2EDuration="19.891805693s" podCreationTimestamp="2026-01-31 15:09:07 +0000 UTC" firstStartedPulling="2026-01-31 15:09:09.767273312 +0000 UTC m=+869.522011605" lastFinishedPulling="2026-01-31 15:09:21.745455482 +0000 UTC m=+881.500193775" observedRunningTime="2026-01-31 15:09:26.888904227 +0000 UTC m=+886.643642520" watchObservedRunningTime="2026-01-31 15:09:26.891805693 +0000 UTC m=+886.646543986" Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.914547 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-1" podStartSLOduration=7.95658511 podStartE2EDuration="19.914532099s" podCreationTimestamp="2026-01-31 15:09:07 +0000 UTC" firstStartedPulling="2026-01-31 15:09:09.746317972 +0000 UTC m=+869.501056275" lastFinishedPulling="2026-01-31 15:09:21.704264951 +0000 UTC m=+881.459003264" observedRunningTime="2026-01-31 15:09:26.91263119 +0000 UTC m=+886.667369483" watchObservedRunningTime="2026-01-31 15:09:26.914532099 +0000 UTC m=+886.669270382" Jan 31 15:09:28 crc kubenswrapper[4763]: I0131 15:09:28.863147 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" event={"ID":"ecb69fa0-2df1-477e-a257-05e0f1dd1c76","Type":"ContainerStarted","Data":"49500059f695e13e5c01f24153da260ca6dd91529719115239e3e0e86165f34a"} Jan 31 15:09:28 crc kubenswrapper[4763]: I0131 15:09:28.863434 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:28 crc kubenswrapper[4763]: I0131 15:09:28.885094 4763 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="swift-kuttl-tests/memcached-0" podStartSLOduration=2.738730083 podStartE2EDuration="4.885076671s" podCreationTimestamp="2026-01-31 15:09:24 +0000 UTC" firstStartedPulling="2026-01-31 15:09:26.257240321 +0000 UTC m=+886.011978614" lastFinishedPulling="2026-01-31 15:09:28.403586909 +0000 UTC m=+888.158325202" observedRunningTime="2026-01-31 15:09:28.882911194 +0000 UTC m=+888.637649497" watchObservedRunningTime="2026-01-31 15:09:28.885076671 +0000 UTC m=+888.639814964" Jan 31 15:09:29 crc kubenswrapper[4763]: I0131 15:09:29.161446 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:29 crc kubenswrapper[4763]: I0131 15:09:29.161524 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:29 crc kubenswrapper[4763]: I0131 15:09:29.180909 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:29 crc kubenswrapper[4763]: I0131 15:09:29.181834 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:29 crc kubenswrapper[4763]: I0131 15:09:29.186783 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:29 crc kubenswrapper[4763]: I0131 15:09:29.186820 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:29 crc kubenswrapper[4763]: I0131 15:09:29.709253 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" Jan 31 15:09:29 crc kubenswrapper[4763]: I0131 15:09:29.709368 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" Jan 31 15:09:29 crc kubenswrapper[4763]: I0131 15:09:29.754031 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" Jan 31 15:09:30 crc kubenswrapper[4763]: I0131 15:09:30.911816 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" Jan 31 15:09:35 crc kubenswrapper[4763]: I0131 15:09:35.113056 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:35 crc kubenswrapper[4763]: I0131 15:09:35.409202 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:35 crc kubenswrapper[4763]: I0131 15:09:35.472732 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:37 crc kubenswrapper[4763]: I0131 15:09:37.828603 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/root-account-create-update-jh8vr"] Jan 31 15:09:37 crc kubenswrapper[4763]: I0131 15:09:37.830391 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:37 crc kubenswrapper[4763]: I0131 15:09:37.833374 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"openstack-mariadb-root-db-secret" Jan 31 15:09:37 crc kubenswrapper[4763]: I0131 15:09:37.844044 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/root-account-create-update-jh8vr"] Jan 31 15:09:37 crc kubenswrapper[4763]: I0131 15:09:37.984177 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/196347d8-7892-4b32-8bc2-0127439a95f0-operator-scripts\") pod \"root-account-create-update-jh8vr\" (UID: \"196347d8-7892-4b32-8bc2-0127439a95f0\") " pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:37 crc kubenswrapper[4763]: I0131 15:09:37.984320 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kwc8\" (UniqueName: \"kubernetes.io/projected/196347d8-7892-4b32-8bc2-0127439a95f0-kube-api-access-8kwc8\") pod \"root-account-create-update-jh8vr\" (UID: \"196347d8-7892-4b32-8bc2-0127439a95f0\") " pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:38 crc kubenswrapper[4763]: I0131 15:09:38.085821 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/196347d8-7892-4b32-8bc2-0127439a95f0-operator-scripts\") pod \"root-account-create-update-jh8vr\" (UID: \"196347d8-7892-4b32-8bc2-0127439a95f0\") " pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:38 crc kubenswrapper[4763]: I0131 15:09:38.085881 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kwc8\" (UniqueName: \"kubernetes.io/projected/196347d8-7892-4b32-8bc2-0127439a95f0-kube-api-access-8kwc8\") pod \"root-account-create-update-jh8vr\" (UID: \"196347d8-7892-4b32-8bc2-0127439a95f0\") " pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:38 crc kubenswrapper[4763]: I0131 15:09:38.086843 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/196347d8-7892-4b32-8bc2-0127439a95f0-operator-scripts\") pod \"root-account-create-update-jh8vr\" (UID: \"196347d8-7892-4b32-8bc2-0127439a95f0\") " pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:38 crc kubenswrapper[4763]: I0131 15:09:38.105867 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kwc8\" (UniqueName: \"kubernetes.io/projected/196347d8-7892-4b32-8bc2-0127439a95f0-kube-api-access-8kwc8\") pod \"root-account-create-update-jh8vr\" (UID: \"196347d8-7892-4b32-8bc2-0127439a95f0\") " pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:38 crc kubenswrapper[4763]: I0131 15:09:38.151934 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:38 crc kubenswrapper[4763]: I0131 15:09:38.600529 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/root-account-create-update-jh8vr"] Jan 31 15:09:38 crc kubenswrapper[4763]: W0131 15:09:38.606601 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod196347d8_7892_4b32_8bc2_0127439a95f0.slice/crio-97e182eb74bedfe8eaf315c1f144ec59ed2098d566229b1fb8133b578dde591b WatchSource:0}: Error finding container 97e182eb74bedfe8eaf315c1f144ec59ed2098d566229b1fb8133b578dde591b: Status 404 returned error can't find the container with id 97e182eb74bedfe8eaf315c1f144ec59ed2098d566229b1fb8133b578dde591b Jan 31 15:09:38 crc kubenswrapper[4763]: I0131 15:09:38.926398 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-jh8vr" event={"ID":"196347d8-7892-4b32-8bc2-0127439a95f0","Type":"ContainerStarted","Data":"84fd8b869419477646b3303789d7d4ce59277c7782af2bb140c22752aadb6987"} Jan 31 15:09:38 crc kubenswrapper[4763]: I0131 15:09:38.927500 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-jh8vr" event={"ID":"196347d8-7892-4b32-8bc2-0127439a95f0","Type":"ContainerStarted","Data":"97e182eb74bedfe8eaf315c1f144ec59ed2098d566229b1fb8133b578dde591b"} Jan 31 15:09:38 crc kubenswrapper[4763]: I0131 15:09:38.940429 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/root-account-create-update-jh8vr" podStartSLOduration=1.940399558 podStartE2EDuration="1.940399558s" podCreationTimestamp="2026-01-31 15:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:09:38.937265696 +0000 UTC m=+898.692003989" watchObservedRunningTime="2026-01-31 15:09:38.940399558 +0000 UTC m=+898.695137911" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.182687 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr"] Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.184923 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.188321 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rrv7w" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.222954 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr"] Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.229515 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.229650 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.229687 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttfwn\" (UniqueName: \"kubernetes.io/projected/b6835994-86f5-4950-b010-780530fceffe-kube-api-access-ttfwn\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.331858 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.332026 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttfwn\" (UniqueName: \"kubernetes.io/projected/b6835994-86f5-4950-b010-780530fceffe-kube-api-access-ttfwn\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.332071 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.332370 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.332861 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.362937 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttfwn\" (UniqueName: \"kubernetes.io/projected/b6835994-86f5-4950-b010-780530fceffe-kube-api-access-ttfwn\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.520617 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.935949 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr"] Jan 31 15:09:39 crc kubenswrapper[4763]: W0131 15:09:39.952040 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6835994_86f5_4950_b010_780530fceffe.slice/crio-e09e065931494d3ba676b6c94d088c34d670811ef1849e321bdfdd21bb343d96 WatchSource:0}: Error finding container e09e065931494d3ba676b6c94d088c34d670811ef1849e321bdfdd21bb343d96: Status 404 returned error can't find the container with id e09e065931494d3ba676b6c94d088c34d670811ef1849e321bdfdd21bb343d96 Jan 31 15:09:40 crc kubenswrapper[4763]: I0131 15:09:40.942115 4763 generic.go:334] "Generic (PLEG): container finished" podID="b6835994-86f5-4950-b010-780530fceffe" containerID="b2086f35b41938e43e63cfec8e02d1afade16cf2d587d3c3dd70dc3e888f4512" exitCode=0 Jan 31 15:09:40 crc kubenswrapper[4763]: I0131 15:09:40.942224 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" event={"ID":"b6835994-86f5-4950-b010-780530fceffe","Type":"ContainerDied","Data":"b2086f35b41938e43e63cfec8e02d1afade16cf2d587d3c3dd70dc3e888f4512"} Jan 31 15:09:40 crc kubenswrapper[4763]: I0131 15:09:40.942254 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" event={"ID":"b6835994-86f5-4950-b010-780530fceffe","Type":"ContainerStarted","Data":"e09e065931494d3ba676b6c94d088c34d670811ef1849e321bdfdd21bb343d96"} Jan 31 15:09:41 crc kubenswrapper[4763]: I0131 15:09:41.948107 4763 generic.go:334] "Generic (PLEG): container finished" podID="b6835994-86f5-4950-b010-780530fceffe" containerID="f6ff0f7e969478dc2260b302e7b19458fa0b15da6240959827dafc4920332836" exitCode=0 Jan 31 15:09:41 crc kubenswrapper[4763]: I0131 15:09:41.948207 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" event={"ID":"b6835994-86f5-4950-b010-780530fceffe","Type":"ContainerDied","Data":"f6ff0f7e969478dc2260b302e7b19458fa0b15da6240959827dafc4920332836"} Jan 31 15:09:41 crc kubenswrapper[4763]: I0131 15:09:41.958770 4763 generic.go:334] "Generic (PLEG): container finished" podID="196347d8-7892-4b32-8bc2-0127439a95f0" containerID="84fd8b869419477646b3303789d7d4ce59277c7782af2bb140c22752aadb6987" exitCode=0 Jan 31 15:09:41 crc kubenswrapper[4763]: I0131 15:09:41.958831 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-jh8vr" event={"ID":"196347d8-7892-4b32-8bc2-0127439a95f0","Type":"ContainerDied","Data":"84fd8b869419477646b3303789d7d4ce59277c7782af2bb140c22752aadb6987"} Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.177768 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.178409 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.643203 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.738394 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7lf9s"] Jan 31 15:09:44 crc kubenswrapper[4763]: E0131 15:09:44.738630 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="196347d8-7892-4b32-8bc2-0127439a95f0" containerName="mariadb-account-create-update" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.738641 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="196347d8-7892-4b32-8bc2-0127439a95f0" containerName="mariadb-account-create-update" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.738790 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="196347d8-7892-4b32-8bc2-0127439a95f0" containerName="mariadb-account-create-update" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.739714 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.749160 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7lf9s"] Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.807135 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kwc8\" (UniqueName: \"kubernetes.io/projected/196347d8-7892-4b32-8bc2-0127439a95f0-kube-api-access-8kwc8\") pod \"196347d8-7892-4b32-8bc2-0127439a95f0\" (UID: \"196347d8-7892-4b32-8bc2-0127439a95f0\") " Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.807259 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/196347d8-7892-4b32-8bc2-0127439a95f0-operator-scripts\") pod \"196347d8-7892-4b32-8bc2-0127439a95f0\" (UID: \"196347d8-7892-4b32-8bc2-0127439a95f0\") " Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.808005 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/196347d8-7892-4b32-8bc2-0127439a95f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "196347d8-7892-4b32-8bc2-0127439a95f0" (UID: "196347d8-7892-4b32-8bc2-0127439a95f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.813936 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/196347d8-7892-4b32-8bc2-0127439a95f0-kube-api-access-8kwc8" (OuterVolumeSpecName: "kube-api-access-8kwc8") pod "196347d8-7892-4b32-8bc2-0127439a95f0" (UID: "196347d8-7892-4b32-8bc2-0127439a95f0"). InnerVolumeSpecName "kube-api-access-8kwc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.909054 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-utilities\") pod \"redhat-operators-7lf9s\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.909127 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-catalog-content\") pod \"redhat-operators-7lf9s\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.909169 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6hqc\" (UniqueName: \"kubernetes.io/projected/92768ea9-03ef-4d26-8cdf-dfc9f45575be-kube-api-access-c6hqc\") pod \"redhat-operators-7lf9s\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.909236 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kwc8\" (UniqueName: \"kubernetes.io/projected/196347d8-7892-4b32-8bc2-0127439a95f0-kube-api-access-8kwc8\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.909247 4763 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/196347d8-7892-4b32-8bc2-0127439a95f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.980426 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-jh8vr" event={"ID":"196347d8-7892-4b32-8bc2-0127439a95f0","Type":"ContainerDied","Data":"97e182eb74bedfe8eaf315c1f144ec59ed2098d566229b1fb8133b578dde591b"} Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.980463 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97e182eb74bedfe8eaf315c1f144ec59ed2098d566229b1fb8133b578dde591b" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.980482 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.010044 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-utilities\") pod \"redhat-operators-7lf9s\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.010122 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-catalog-content\") pod \"redhat-operators-7lf9s\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.010167 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6hqc\" (UniqueName: \"kubernetes.io/projected/92768ea9-03ef-4d26-8cdf-dfc9f45575be-kube-api-access-c6hqc\") pod \"redhat-operators-7lf9s\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.010654 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-utilities\") pod \"redhat-operators-7lf9s\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.010670 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-catalog-content\") pod \"redhat-operators-7lf9s\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.031902 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6hqc\" (UniqueName: \"kubernetes.io/projected/92768ea9-03ef-4d26-8cdf-dfc9f45575be-kube-api-access-c6hqc\") pod \"redhat-operators-7lf9s\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.057156 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.535546 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7lf9s"] Jan 31 15:09:45 crc kubenswrapper[4763]: W0131 15:09:45.539248 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92768ea9_03ef_4d26_8cdf_dfc9f45575be.slice/crio-cf5fe5ed60cf52bb8acb61df24499edf923c28d5275c3d448151c826d9a8c944 WatchSource:0}: Error finding container cf5fe5ed60cf52bb8acb61df24499edf923c28d5275c3d448151c826d9a8c944: Status 404 returned error can't find the container with id cf5fe5ed60cf52bb8acb61df24499edf923c28d5275c3d448151c826d9a8c944 Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.986938 4763 generic.go:334] "Generic (PLEG): container finished" podID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerID="073e38d7a622a2b02edd186fff7b67bf5dff918525a27c9bdca3bafb1385dea0" exitCode=0 Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.987095 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lf9s" event={"ID":"92768ea9-03ef-4d26-8cdf-dfc9f45575be","Type":"ContainerDied","Data":"073e38d7a622a2b02edd186fff7b67bf5dff918525a27c9bdca3bafb1385dea0"} Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.987218 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lf9s" event={"ID":"92768ea9-03ef-4d26-8cdf-dfc9f45575be","Type":"ContainerStarted","Data":"cf5fe5ed60cf52bb8acb61df24499edf923c28d5275c3d448151c826d9a8c944"} Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.989773 4763 generic.go:334] "Generic (PLEG): container finished" podID="b6835994-86f5-4950-b010-780530fceffe" containerID="7ace12af8c0e8ac2c2939cceefc9553793b2092628e79e5cd470deb7ce570d8c" exitCode=0 Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.989800 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" event={"ID":"b6835994-86f5-4950-b010-780530fceffe","Type":"ContainerDied","Data":"7ace12af8c0e8ac2c2939cceefc9553793b2092628e79e5cd470deb7ce570d8c"} Jan 31 15:09:46 crc kubenswrapper[4763]: I0131 15:09:46.997719 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lf9s" event={"ID":"92768ea9-03ef-4d26-8cdf-dfc9f45575be","Type":"ContainerStarted","Data":"5a540798457e4c3c700e89a882a83e3778cea51f9eae6a9767248d8876ca9967"} Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.438254 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.542595 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-util\") pod \"b6835994-86f5-4950-b010-780530fceffe\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.542968 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttfwn\" (UniqueName: \"kubernetes.io/projected/b6835994-86f5-4950-b010-780530fceffe-kube-api-access-ttfwn\") pod \"b6835994-86f5-4950-b010-780530fceffe\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.543031 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-bundle\") pod \"b6835994-86f5-4950-b010-780530fceffe\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.543632 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-bundle" (OuterVolumeSpecName: "bundle") pod "b6835994-86f5-4950-b010-780530fceffe" (UID: "b6835994-86f5-4950-b010-780530fceffe"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.551337 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6835994-86f5-4950-b010-780530fceffe-kube-api-access-ttfwn" (OuterVolumeSpecName: "kube-api-access-ttfwn") pod "b6835994-86f5-4950-b010-780530fceffe" (UID: "b6835994-86f5-4950-b010-780530fceffe"). InnerVolumeSpecName "kube-api-access-ttfwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.557047 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-util" (OuterVolumeSpecName: "util") pod "b6835994-86f5-4950-b010-780530fceffe" (UID: "b6835994-86f5-4950-b010-780530fceffe"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.645015 4763 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-util\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.645062 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttfwn\" (UniqueName: \"kubernetes.io/projected/b6835994-86f5-4950-b010-780530fceffe-kube-api-access-ttfwn\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.645077 4763 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:48 crc kubenswrapper[4763]: I0131 15:09:48.004643 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:48 crc kubenswrapper[4763]: I0131 15:09:48.004637 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" event={"ID":"b6835994-86f5-4950-b010-780530fceffe","Type":"ContainerDied","Data":"e09e065931494d3ba676b6c94d088c34d670811ef1849e321bdfdd21bb343d96"} Jan 31 15:09:48 crc kubenswrapper[4763]: I0131 15:09:48.004784 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e09e065931494d3ba676b6c94d088c34d670811ef1849e321bdfdd21bb343d96" Jan 31 15:09:48 crc kubenswrapper[4763]: I0131 15:09:48.009136 4763 generic.go:334] "Generic (PLEG): container finished" podID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerID="5a540798457e4c3c700e89a882a83e3778cea51f9eae6a9767248d8876ca9967" exitCode=0 Jan 31 15:09:48 crc kubenswrapper[4763]: I0131 15:09:48.009164 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lf9s" event={"ID":"92768ea9-03ef-4d26-8cdf-dfc9f45575be","Type":"ContainerDied","Data":"5a540798457e4c3c700e89a882a83e3778cea51f9eae6a9767248d8876ca9967"} Jan 31 15:09:49 crc kubenswrapper[4763]: I0131 15:09:49.018271 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lf9s" event={"ID":"92768ea9-03ef-4d26-8cdf-dfc9f45575be","Type":"ContainerStarted","Data":"353034f3575668335c53a61718a3785c11949a3c5bc89f04bc4c7b1cdde9725f"} Jan 31 15:09:49 crc kubenswrapper[4763]: I0131 15:09:49.038461 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7lf9s" podStartSLOduration=2.504772579 podStartE2EDuration="5.038440601s" podCreationTimestamp="2026-01-31 15:09:44 +0000 UTC" firstStartedPulling="2026-01-31 15:09:45.988371821 +0000 UTC m=+905.743110124" lastFinishedPulling="2026-01-31 15:09:48.522039853 +0000 UTC m=+908.276778146" observedRunningTime="2026-01-31 15:09:49.035355599 +0000 UTC m=+908.790093892" watchObservedRunningTime="2026-01-31 15:09:49.038440601 +0000 UTC m=+908.793178894" Jan 31 15:09:49 crc kubenswrapper[4763]: I0131 15:09:49.233035 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/openstack-galera-2" podUID="cd0d5ccb-1d59-428e-9a53-17427cd0e5dc" containerName="galera" probeResult="failure" output=< Jan 31 15:09:49 crc kubenswrapper[4763]: wsrep_local_state_comment (Donor/Desynced) differs from Synced Jan 31 15:09:49 crc kubenswrapper[4763]: > Jan 31 15:09:53 crc kubenswrapper[4763]: I0131 15:09:53.981066 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.093405 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.561557 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp"] Jan 31 15:09:54 crc kubenswrapper[4763]: E0131 15:09:54.562119 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6835994-86f5-4950-b010-780530fceffe" containerName="extract" Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.562137 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6835994-86f5-4950-b010-780530fceffe" containerName="extract" Jan 
Jan 31 15:09:54 crc kubenswrapper[4763]: E0131 15:09:54.562158 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6835994-86f5-4950-b010-780530fceffe" containerName="util"
Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.562166 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6835994-86f5-4950-b010-780530fceffe" containerName="util"
Jan 31 15:09:54 crc kubenswrapper[4763]: E0131 15:09:54.562184 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6835994-86f5-4950-b010-780530fceffe" containerName="pull"
Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.562194 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6835994-86f5-4950-b010-780530fceffe" containerName="pull"
Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.562341 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6835994-86f5-4950-b010-780530fceffe" containerName="extract"
Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.562834 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp"
Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.565311 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-l92vh"
Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.573265 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp"]
Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.732498 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmq4w\" (UniqueName: \"kubernetes.io/projected/8225c1b7-e70c-4eac-8c03-c85f86ccba6b-kube-api-access-tmq4w\") pod \"rabbitmq-cluster-operator-779fc9694b-2ltrp\" (UID: \"8225c1b7-e70c-4eac-8c03-c85f86ccba6b\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp"
Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.834235 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmq4w\" (UniqueName: \"kubernetes.io/projected/8225c1b7-e70c-4eac-8c03-c85f86ccba6b-kube-api-access-tmq4w\") pod \"rabbitmq-cluster-operator-779fc9694b-2ltrp\" (UID: \"8225c1b7-e70c-4eac-8c03-c85f86ccba6b\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp"
Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.858523 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmq4w\" (UniqueName: \"kubernetes.io/projected/8225c1b7-e70c-4eac-8c03-c85f86ccba6b-kube-api-access-tmq4w\") pod \"rabbitmq-cluster-operator-779fc9694b-2ltrp\" (UID: \"8225c1b7-e70c-4eac-8c03-c85f86ccba6b\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp"
Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.877822 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp" Jan 31 15:09:55 crc kubenswrapper[4763]: I0131 15:09:55.058327 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:55 crc kubenswrapper[4763]: I0131 15:09:55.058581 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:55 crc kubenswrapper[4763]: I0131 15:09:55.301165 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp"] Jan 31 15:09:55 crc kubenswrapper[4763]: W0131 15:09:55.302432 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8225c1b7_e70c_4eac_8c03_c85f86ccba6b.slice/crio-d103da8d061f8549ac02ebbf2ceea955df40ece3aa53193ad4ff0a72f1621ad9 WatchSource:0}: Error finding container d103da8d061f8549ac02ebbf2ceea955df40ece3aa53193ad4ff0a72f1621ad9: Status 404 returned error can't find the container with id d103da8d061f8549ac02ebbf2ceea955df40ece3aa53193ad4ff0a72f1621ad9 Jan 31 15:09:56 crc kubenswrapper[4763]: I0131 15:09:56.059004 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp" event={"ID":"8225c1b7-e70c-4eac-8c03-c85f86ccba6b","Type":"ContainerStarted","Data":"d103da8d061f8549ac02ebbf2ceea955df40ece3aa53193ad4ff0a72f1621ad9"} Jan 31 15:09:56 crc kubenswrapper[4763]: I0131 15:09:56.101735 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7lf9s" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerName="registry-server" probeResult="failure" output=< Jan 31 15:09:56 crc kubenswrapper[4763]: timeout: failed to connect service ":50051" within 1s Jan 31 15:09:56 crc kubenswrapper[4763]: > Jan 31 15:09:57 crc kubenswrapper[4763]: I0131 15:09:57.519424 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:57 crc kubenswrapper[4763]: I0131 15:09:57.580619 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:59 crc kubenswrapper[4763]: I0131 15:09:59.082565 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp" event={"ID":"8225c1b7-e70c-4eac-8c03-c85f86ccba6b","Type":"ContainerStarted","Data":"2324d8a9ed8f2c44f3ad6220287cb0c518cf7ffab76c02c0cfa9b226cf105495"} Jan 31 15:09:59 crc kubenswrapper[4763]: I0131 15:09:59.106398 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp" podStartSLOduration=1.658153131 podStartE2EDuration="5.10636792s" podCreationTimestamp="2026-01-31 15:09:54 +0000 UTC" firstStartedPulling="2026-01-31 15:09:55.304776742 +0000 UTC m=+915.059515035" lastFinishedPulling="2026-01-31 15:09:58.752991511 +0000 UTC m=+918.507729824" observedRunningTime="2026-01-31 15:09:59.095068662 +0000 UTC m=+918.849806985" watchObservedRunningTime="2026-01-31 15:09:59.10636792 +0000 UTC m=+918.861106243" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.588208 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.589652 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.591950 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-default-user" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.592014 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"rabbitmq-plugins-conf" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.592178 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"rabbitmq-server-conf" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.592737 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-server-dockercfg-r5k9h" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.592882 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-erlang-cookie" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.612856 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.739516 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dee0d43f-8ff0-4094-9833-92cda38ee182-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.739586 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dee0d43f-8ff0-4094-9833-92cda38ee182-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.739777 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cr98\" (UniqueName: \"kubernetes.io/projected/dee0d43f-8ff0-4094-9833-92cda38ee182-kube-api-access-4cr98\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.739896 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dee0d43f-8ff0-4094-9833-92cda38ee182-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.739940 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dee0d43f-8ff0-4094-9833-92cda38ee182-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.739965 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dee0d43f-8ff0-4094-9833-92cda38ee182-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc 
kubenswrapper[4763]: I0131 15:10:01.739988 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b0c2808c-e218-4e2e-a8c1-f733c80242ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0c2808c-e218-4e2e-a8c1-f733c80242ac\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.740018 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dee0d43f-8ff0-4094-9833-92cda38ee182-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.841246 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dee0d43f-8ff0-4094-9833-92cda38ee182-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.841301 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dee0d43f-8ff0-4094-9833-92cda38ee182-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.841324 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b0c2808c-e218-4e2e-a8c1-f733c80242ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0c2808c-e218-4e2e-a8c1-f733c80242ac\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.841369 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dee0d43f-8ff0-4094-9833-92cda38ee182-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.841389 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dee0d43f-8ff0-4094-9833-92cda38ee182-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.841416 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dee0d43f-8ff0-4094-9833-92cda38ee182-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.841438 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dee0d43f-8ff0-4094-9833-92cda38ee182-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.841478 4763 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cr98\" (UniqueName: \"kubernetes.io/projected/dee0d43f-8ff0-4094-9833-92cda38ee182-kube-api-access-4cr98\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.842110 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dee0d43f-8ff0-4094-9833-92cda38ee182-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.842315 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dee0d43f-8ff0-4094-9833-92cda38ee182-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.844376 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dee0d43f-8ff0-4094-9833-92cda38ee182-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.848261 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dee0d43f-8ff0-4094-9833-92cda38ee182-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.849072 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dee0d43f-8ff0-4094-9833-92cda38ee182-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.849889 4763 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
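The csi_attacher.go line above records a capability check: because the kubevirt.io.hostpath-provisioner driver does not advertise the STAGE_UNSTAGE_VOLUME node capability, the global staging step (MountDevice) is skipped and only the per-pod publish (SetUp) runs, which is why the next line reports MountDevice as trivially succeeded. A hypothetical sketch of that branch, with assumed types rather than the real CSI or kubelet API:

// csi_stage_sketch.go - illustrative version of the "Skipping MountDevice"
// decision logged above. Not kubelet or CSI source.
package main

import "fmt"

type nodeCapability string

const stageUnstageVolume nodeCapability = "STAGE_UNSTAGE_VOLUME"

type driver struct{ caps map[nodeCapability]bool }

// mountDevice stages a volume at the node-global mount point only when the
// driver supports staging; otherwise the step is a logged no-op.
func (d driver) mountDevice(volumeID, globalMountPath string) {
	if !d.caps[stageUnstageVolume] {
		fmt.Println("attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...")
		return
	}
	fmt.Printf("staging %s at %s\n", volumeID, globalMountPath)
}

func main() {
	hostpath := driver{caps: map[nodeCapability]bool{}} // no staging support
	hostpath.mountDevice("pvc-b0c2808c-e218-4e2e-a8c1-f733c80242ac",
		"/var/lib/kubelet/plugins/kubernetes.io/csi/.../globalmount")
}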
Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.849915 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b0c2808c-e218-4e2e-a8c1-f733c80242ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0c2808c-e218-4e2e-a8c1-f733c80242ac\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a5d563a84855d70dea8af3b1d20a098856ea2dfb7695437899fa690ba4d17c18/globalmount\"" pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.856651 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cr98\" (UniqueName: \"kubernetes.io/projected/dee0d43f-8ff0-4094-9833-92cda38ee182-kube-api-access-4cr98\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.867477 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dee0d43f-8ff0-4094-9833-92cda38ee182-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.872219 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b0c2808c-e218-4e2e-a8c1-f733c80242ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0c2808c-e218-4e2e-a8c1-f733c80242ac\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.922455 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:02 crc kubenswrapper[4763]: I0131 15:10:02.391105 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Jan 31 15:10:02 crc kubenswrapper[4763]: W0131 15:10:02.399936 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddee0d43f_8ff0_4094_9833_92cda38ee182.slice/crio-16302b32ea7a0164c4878e259db72d48663e1b565d6f4b36531750d5e18e8803 WatchSource:0}: Error finding container 16302b32ea7a0164c4878e259db72d48663e1b565d6f4b36531750d5e18e8803: Status 404 returned error can't find the container with id 16302b32ea7a0164c4878e259db72d48663e1b565d6f4b36531750d5e18e8803 Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.113144 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"dee0d43f-8ff0-4094-9833-92cda38ee182","Type":"ContainerStarted","Data":"16302b32ea7a0164c4878e259db72d48663e1b565d6f4b36531750d5e18e8803"} Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.131125 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-k7dfb"] Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.132148 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-k7dfb" Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.135198 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-rs9zg" Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.139614 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-k7dfb"] Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.282620 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf5t2\" (UniqueName: \"kubernetes.io/projected/15e174ce-d52f-4b1f-a00f-97624902794c-kube-api-access-hf5t2\") pod \"keystone-operator-index-k7dfb\" (UID: \"15e174ce-d52f-4b1f-a00f-97624902794c\") " pod="openstack-operators/keystone-operator-index-k7dfb" Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.384085 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf5t2\" (UniqueName: \"kubernetes.io/projected/15e174ce-d52f-4b1f-a00f-97624902794c-kube-api-access-hf5t2\") pod \"keystone-operator-index-k7dfb\" (UID: \"15e174ce-d52f-4b1f-a00f-97624902794c\") " pod="openstack-operators/keystone-operator-index-k7dfb" Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.404647 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf5t2\" (UniqueName: \"kubernetes.io/projected/15e174ce-d52f-4b1f-a00f-97624902794c-kube-api-access-hf5t2\") pod \"keystone-operator-index-k7dfb\" (UID: \"15e174ce-d52f-4b1f-a00f-97624902794c\") " pod="openstack-operators/keystone-operator-index-k7dfb" Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.508651 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-k7dfb" Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.978664 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-k7dfb"] Jan 31 15:10:04 crc kubenswrapper[4763]: I0131 15:10:04.120953 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-k7dfb" event={"ID":"15e174ce-d52f-4b1f-a00f-97624902794c","Type":"ContainerStarted","Data":"a6060687b04652b9a0f6ae1dd3967200dafd729292b1e7995df8cfdf4f628cac"} Jan 31 15:10:05 crc kubenswrapper[4763]: I0131 15:10:05.121048 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:10:05 crc kubenswrapper[4763]: I0131 15:10:05.195522 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:10:07 crc kubenswrapper[4763]: I0131 15:10:07.929491 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-k7dfb"] Jan 31 15:10:08 crc kubenswrapper[4763]: I0131 15:10:08.730058 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-njgcq"] Jan 31 15:10:08 crc kubenswrapper[4763]: I0131 15:10:08.731314 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-njgcq" Jan 31 15:10:08 crc kubenswrapper[4763]: I0131 15:10:08.742253 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-njgcq"] Jan 31 15:10:08 crc kubenswrapper[4763]: I0131 15:10:08.864830 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsrb7\" (UniqueName: \"kubernetes.io/projected/191c97ac-f003-4a51-8f06-395adf3ac8a7-kube-api-access-wsrb7\") pod \"keystone-operator-index-njgcq\" (UID: \"191c97ac-f003-4a51-8f06-395adf3ac8a7\") " pod="openstack-operators/keystone-operator-index-njgcq" Jan 31 15:10:08 crc kubenswrapper[4763]: I0131 15:10:08.966515 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsrb7\" (UniqueName: \"kubernetes.io/projected/191c97ac-f003-4a51-8f06-395adf3ac8a7-kube-api-access-wsrb7\") pod \"keystone-operator-index-njgcq\" (UID: \"191c97ac-f003-4a51-8f06-395adf3ac8a7\") " pod="openstack-operators/keystone-operator-index-njgcq" Jan 31 15:10:08 crc kubenswrapper[4763]: I0131 15:10:08.997726 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsrb7\" (UniqueName: \"kubernetes.io/projected/191c97ac-f003-4a51-8f06-395adf3ac8a7-kube-api-access-wsrb7\") pod \"keystone-operator-index-njgcq\" (UID: \"191c97ac-f003-4a51-8f06-395adf3ac8a7\") " pod="openstack-operators/keystone-operator-index-njgcq" Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.088918 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-njgcq" Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.167647 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-k7dfb" event={"ID":"15e174ce-d52f-4b1f-a00f-97624902794c","Type":"ContainerStarted","Data":"1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647"} Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.167968 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-index-k7dfb" podUID="15e174ce-d52f-4b1f-a00f-97624902794c" containerName="registry-server" containerID="cri-o://1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647" gracePeriod=2 Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.192670 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-k7dfb" podStartSLOduration=2.314011014 podStartE2EDuration="6.192630118s" podCreationTimestamp="2026-01-31 15:10:03 +0000 UTC" firstStartedPulling="2026-01-31 15:10:03.992598637 +0000 UTC m=+923.747336930" lastFinishedPulling="2026-01-31 15:10:07.871217741 +0000 UTC m=+927.625956034" observedRunningTime="2026-01-31 15:10:09.191289052 +0000 UTC m=+928.946027365" watchObservedRunningTime="2026-01-31 15:10:09.192630118 +0000 UTC m=+928.947368421" Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.573819 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-njgcq"] Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.615175 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-k7dfb" Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.780184 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf5t2\" (UniqueName: \"kubernetes.io/projected/15e174ce-d52f-4b1f-a00f-97624902794c-kube-api-access-hf5t2\") pod \"15e174ce-d52f-4b1f-a00f-97624902794c\" (UID: \"15e174ce-d52f-4b1f-a00f-97624902794c\") " Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.792885 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e174ce-d52f-4b1f-a00f-97624902794c-kube-api-access-hf5t2" (OuterVolumeSpecName: "kube-api-access-hf5t2") pod "15e174ce-d52f-4b1f-a00f-97624902794c" (UID: "15e174ce-d52f-4b1f-a00f-97624902794c"). InnerVolumeSpecName "kube-api-access-hf5t2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.881264 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf5t2\" (UniqueName: \"kubernetes.io/projected/15e174ce-d52f-4b1f-a00f-97624902794c-kube-api-access-hf5t2\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.927002 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7lf9s"] Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.927320 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7lf9s" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerName="registry-server" containerID="cri-o://353034f3575668335c53a61718a3785c11949a3c5bc89f04bc4c7b1cdde9725f" gracePeriod=2 Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.176393 4763 generic.go:334] "Generic (PLEG): container finished" podID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerID="353034f3575668335c53a61718a3785c11949a3c5bc89f04bc4c7b1cdde9725f" exitCode=0 Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.176638 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lf9s" event={"ID":"92768ea9-03ef-4d26-8cdf-dfc9f45575be","Type":"ContainerDied","Data":"353034f3575668335c53a61718a3785c11949a3c5bc89f04bc4c7b1cdde9725f"} Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.177648 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"dee0d43f-8ff0-4094-9833-92cda38ee182","Type":"ContainerStarted","Data":"6811b875a2287d4113a9be20ece01fcce4b87486deb0296107994f35e34ccfcc"} Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.180139 4763 generic.go:334] "Generic (PLEG): container finished" podID="15e174ce-d52f-4b1f-a00f-97624902794c" containerID="1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647" exitCode=0 Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.180187 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-k7dfb" event={"ID":"15e174ce-d52f-4b1f-a00f-97624902794c","Type":"ContainerDied","Data":"1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647"} Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.180203 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-k7dfb" event={"ID":"15e174ce-d52f-4b1f-a00f-97624902794c","Type":"ContainerDied","Data":"a6060687b04652b9a0f6ae1dd3967200dafd729292b1e7995df8cfdf4f628cac"} Jan 31 15:10:10 crc 
kubenswrapper[4763]: I0131 15:10:10.180209 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-k7dfb" Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.180219 4763 scope.go:117] "RemoveContainer" containerID="1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647" Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.187336 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-njgcq" event={"ID":"191c97ac-f003-4a51-8f06-395adf3ac8a7","Type":"ContainerStarted","Data":"44e02b26a38394eea42daed94a5d4412cc2b2eabb36f27f29c55f4fd859fae18"} Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.187368 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-njgcq" event={"ID":"191c97ac-f003-4a51-8f06-395adf3ac8a7","Type":"ContainerStarted","Data":"e4ca04660bfdcdcc2db1a3f893727cca54819a87089ba2336ca91feb5dda7059"} Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.214946 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-njgcq" podStartSLOduration=1.806248966 podStartE2EDuration="2.214930958s" podCreationTimestamp="2026-01-31 15:10:08 +0000 UTC" firstStartedPulling="2026-01-31 15:10:09.597870758 +0000 UTC m=+929.352609051" lastFinishedPulling="2026-01-31 15:10:10.00655274 +0000 UTC m=+929.761291043" observedRunningTime="2026-01-31 15:10:10.21388214 +0000 UTC m=+929.968620433" watchObservedRunningTime="2026-01-31 15:10:10.214930958 +0000 UTC m=+929.969669251" Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.217490 4763 scope.go:117] "RemoveContainer" containerID="1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647" Jan 31 15:10:10 crc kubenswrapper[4763]: E0131 15:10:10.218428 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647\": container with ID starting with 1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647 not found: ID does not exist" containerID="1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647" Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.218483 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647"} err="failed to get container status \"1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647\": rpc error: code = NotFound desc = could not find container \"1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647\": container with ID starting with 1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647 not found: ID does not exist" Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.236636 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-k7dfb"] Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.244446 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-index-k7dfb"] Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.368815 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.489689 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-utilities\") pod \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.490091 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-catalog-content\") pod \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.490161 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6hqc\" (UniqueName: \"kubernetes.io/projected/92768ea9-03ef-4d26-8cdf-dfc9f45575be-kube-api-access-c6hqc\") pod \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.490997 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-utilities" (OuterVolumeSpecName: "utilities") pod "92768ea9-03ef-4d26-8cdf-dfc9f45575be" (UID: "92768ea9-03ef-4d26-8cdf-dfc9f45575be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.495667 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92768ea9-03ef-4d26-8cdf-dfc9f45575be-kube-api-access-c6hqc" (OuterVolumeSpecName: "kube-api-access-c6hqc") pod "92768ea9-03ef-4d26-8cdf-dfc9f45575be" (UID: "92768ea9-03ef-4d26-8cdf-dfc9f45575be"). InnerVolumeSpecName "kube-api-access-c6hqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.592979 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.593031 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6hqc\" (UniqueName: \"kubernetes.io/projected/92768ea9-03ef-4d26-8cdf-dfc9f45575be-kube-api-access-c6hqc\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.612986 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92768ea9-03ef-4d26-8cdf-dfc9f45575be" (UID: "92768ea9-03ef-4d26-8cdf-dfc9f45575be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.694440 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:11 crc kubenswrapper[4763]: I0131 15:10:11.049662 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e174ce-d52f-4b1f-a00f-97624902794c" path="/var/lib/kubelet/pods/15e174ce-d52f-4b1f-a00f-97624902794c/volumes" Jan 31 15:10:11 crc kubenswrapper[4763]: I0131 15:10:11.196272 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:10:11 crc kubenswrapper[4763]: I0131 15:10:11.196288 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lf9s" event={"ID":"92768ea9-03ef-4d26-8cdf-dfc9f45575be","Type":"ContainerDied","Data":"cf5fe5ed60cf52bb8acb61df24499edf923c28d5275c3d448151c826d9a8c944"} Jan 31 15:10:11 crc kubenswrapper[4763]: I0131 15:10:11.196359 4763 scope.go:117] "RemoveContainer" containerID="353034f3575668335c53a61718a3785c11949a3c5bc89f04bc4c7b1cdde9725f" Jan 31 15:10:11 crc kubenswrapper[4763]: I0131 15:10:11.216490 4763 scope.go:117] "RemoveContainer" containerID="5a540798457e4c3c700e89a882a83e3778cea51f9eae6a9767248d8876ca9967" Jan 31 15:10:11 crc kubenswrapper[4763]: I0131 15:10:11.218730 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7lf9s"] Jan 31 15:10:11 crc kubenswrapper[4763]: I0131 15:10:11.223817 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7lf9s"] Jan 31 15:10:11 crc kubenswrapper[4763]: I0131 15:10:11.237246 4763 scope.go:117] "RemoveContainer" containerID="073e38d7a622a2b02edd186fff7b67bf5dff918525a27c9bdca3bafb1385dea0" Jan 31 15:10:13 crc kubenswrapper[4763]: I0131 15:10:13.050672 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" path="/var/lib/kubelet/pods/92768ea9-03ef-4d26-8cdf-dfc9f45575be/volumes" Jan 31 15:10:14 crc kubenswrapper[4763]: I0131 15:10:14.177150 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:10:14 crc kubenswrapper[4763]: I0131 15:10:14.177259 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:10:19 crc kubenswrapper[4763]: I0131 15:10:19.090377 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-njgcq" Jan 31 15:10:19 crc kubenswrapper[4763]: I0131 15:10:19.091043 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-njgcq" Jan 31 15:10:19 crc kubenswrapper[4763]: I0131 15:10:19.125765 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-njgcq" Jan 31 
15:10:19 crc kubenswrapper[4763]: I0131 15:10:19.285511 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-njgcq" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.838373 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f"] Jan 31 15:10:22 crc kubenswrapper[4763]: E0131 15:10:22.839042 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerName="registry-server" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.839058 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerName="registry-server" Jan 31 15:10:22 crc kubenswrapper[4763]: E0131 15:10:22.839071 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e174ce-d52f-4b1f-a00f-97624902794c" containerName="registry-server" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.839077 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e174ce-d52f-4b1f-a00f-97624902794c" containerName="registry-server" Jan 31 15:10:22 crc kubenswrapper[4763]: E0131 15:10:22.839092 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerName="extract-content" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.839099 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerName="extract-content" Jan 31 15:10:22 crc kubenswrapper[4763]: E0131 15:10:22.839111 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerName="extract-utilities" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.839118 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerName="extract-utilities" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.839233 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerName="registry-server" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.839247 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e174ce-d52f-4b1f-a00f-97624902794c" containerName="registry-server" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.840301 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.844204 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rrv7w" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.859653 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f"] Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.010860 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.010966 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58z8p\" (UniqueName: \"kubernetes.io/projected/6fb80892-b089-4dff-baa8-44ffdf6b9b84-kube-api-access-58z8p\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.011071 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.112029 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.112106 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.112152 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58z8p\" (UniqueName: \"kubernetes.io/projected/6fb80892-b089-4dff-baa8-44ffdf6b9b84-kube-api-access-58z8p\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.112675 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.112749 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.135169 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58z8p\" (UniqueName: \"kubernetes.io/projected/6fb80892-b089-4dff-baa8-44ffdf6b9b84-kube-api-access-58z8p\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.165194 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.611421 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f"] Jan 31 15:10:24 crc kubenswrapper[4763]: I0131 15:10:24.291782 4763 generic.go:334] "Generic (PLEG): container finished" podID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerID="a988c20b2fc824dfa2187fb0c43dcc497e6078364f55152dc25f4fb7f4a11a13" exitCode=0 Jan 31 15:10:24 crc kubenswrapper[4763]: I0131 15:10:24.291868 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" event={"ID":"6fb80892-b089-4dff-baa8-44ffdf6b9b84","Type":"ContainerDied","Data":"a988c20b2fc824dfa2187fb0c43dcc497e6078364f55152dc25f4fb7f4a11a13"} Jan 31 15:10:24 crc kubenswrapper[4763]: I0131 15:10:24.291940 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" event={"ID":"6fb80892-b089-4dff-baa8-44ffdf6b9b84","Type":"ContainerStarted","Data":"c1f5aa5759e18f6ecd4208338f7fca67fd1ed594e63c10e05f827e9d6840bf6d"} Jan 31 15:10:26 crc kubenswrapper[4763]: I0131 15:10:26.306832 4763 generic.go:334] "Generic (PLEG): container finished" podID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerID="b117fd5ccecec46af5a92a0f70fa14891623c7f571bb856a356307bbb4cbe941" exitCode=0 Jan 31 15:10:26 crc kubenswrapper[4763]: I0131 15:10:26.306866 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" event={"ID":"6fb80892-b089-4dff-baa8-44ffdf6b9b84","Type":"ContainerDied","Data":"b117fd5ccecec46af5a92a0f70fa14891623c7f571bb856a356307bbb4cbe941"} Jan 31 15:10:27 crc kubenswrapper[4763]: I0131 15:10:27.320112 4763 generic.go:334] "Generic (PLEG): container finished" podID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerID="61146e2d5ceacdad075aedad451af4039443c708b570e2c5cab365b417714e6c" exitCode=0 Jan 31 15:10:27 crc kubenswrapper[4763]: I0131 15:10:27.320182 4763 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" event={"ID":"6fb80892-b089-4dff-baa8-44ffdf6b9b84","Type":"ContainerDied","Data":"61146e2d5ceacdad075aedad451af4039443c708b570e2c5cab365b417714e6c"} Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.696531 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.789762 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58z8p\" (UniqueName: \"kubernetes.io/projected/6fb80892-b089-4dff-baa8-44ffdf6b9b84-kube-api-access-58z8p\") pod \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.789900 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-util\") pod \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.789934 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-bundle\") pod \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.791130 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-bundle" (OuterVolumeSpecName: "bundle") pod "6fb80892-b089-4dff-baa8-44ffdf6b9b84" (UID: "6fb80892-b089-4dff-baa8-44ffdf6b9b84"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.796629 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb80892-b089-4dff-baa8-44ffdf6b9b84-kube-api-access-58z8p" (OuterVolumeSpecName: "kube-api-access-58z8p") pod "6fb80892-b089-4dff-baa8-44ffdf6b9b84" (UID: "6fb80892-b089-4dff-baa8-44ffdf6b9b84"). InnerVolumeSpecName "kube-api-access-58z8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.821541 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-util" (OuterVolumeSpecName: "util") pod "6fb80892-b089-4dff-baa8-44ffdf6b9b84" (UID: "6fb80892-b089-4dff-baa8-44ffdf6b9b84"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.891360 4763 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-util\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.891418 4763 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.891428 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58z8p\" (UniqueName: \"kubernetes.io/projected/6fb80892-b089-4dff-baa8-44ffdf6b9b84-kube-api-access-58z8p\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:29 crc kubenswrapper[4763]: I0131 15:10:29.338115 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" event={"ID":"6fb80892-b089-4dff-baa8-44ffdf6b9b84","Type":"ContainerDied","Data":"c1f5aa5759e18f6ecd4208338f7fca67fd1ed594e63c10e05f827e9d6840bf6d"} Jan 31 15:10:29 crc kubenswrapper[4763]: I0131 15:10:29.338169 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1f5aa5759e18f6ecd4208338f7fca67fd1ed594e63c10e05f827e9d6840bf6d" Jan 31 15:10:29 crc kubenswrapper[4763]: I0131 15:10:29.338210 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.383134 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7"] Jan 31 15:10:40 crc kubenswrapper[4763]: E0131 15:10:40.383638 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerName="util" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.383652 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerName="util" Jan 31 15:10:40 crc kubenswrapper[4763]: E0131 15:10:40.383675 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerName="extract" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.383683 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerName="extract" Jan 31 15:10:40 crc kubenswrapper[4763]: E0131 15:10:40.383710 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerName="pull" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.383716 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerName="pull" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.383848 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerName="extract" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.384460 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.387035 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-m2hdc" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.392043 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.394608 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7"] Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.549987 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z4nz\" (UniqueName: \"kubernetes.io/projected/970b855e-e278-4e6b-b9ba-733f8f798f59-kube-api-access-6z4nz\") pod \"keystone-operator-controller-manager-7598465c56-xt6m7\" (UID: \"970b855e-e278-4e6b-b9ba-733f8f798f59\") " pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.550052 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/970b855e-e278-4e6b-b9ba-733f8f798f59-webhook-cert\") pod \"keystone-operator-controller-manager-7598465c56-xt6m7\" (UID: \"970b855e-e278-4e6b-b9ba-733f8f798f59\") " pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.550335 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/970b855e-e278-4e6b-b9ba-733f8f798f59-apiservice-cert\") pod \"keystone-operator-controller-manager-7598465c56-xt6m7\" (UID: \"970b855e-e278-4e6b-b9ba-733f8f798f59\") " pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.651568 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z4nz\" (UniqueName: \"kubernetes.io/projected/970b855e-e278-4e6b-b9ba-733f8f798f59-kube-api-access-6z4nz\") pod \"keystone-operator-controller-manager-7598465c56-xt6m7\" (UID: \"970b855e-e278-4e6b-b9ba-733f8f798f59\") " pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.651631 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/970b855e-e278-4e6b-b9ba-733f8f798f59-webhook-cert\") pod \"keystone-operator-controller-manager-7598465c56-xt6m7\" (UID: \"970b855e-e278-4e6b-b9ba-733f8f798f59\") " pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.651789 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/970b855e-e278-4e6b-b9ba-733f8f798f59-apiservice-cert\") pod \"keystone-operator-controller-manager-7598465c56-xt6m7\" (UID: \"970b855e-e278-4e6b-b9ba-733f8f798f59\") " pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.658611 4763 
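
[editor's note] The kube-api-access-* volume being verified and mounted above is not declared in the operator's pod spec; it is the projected service-account token volume injected by admission. A sketch of its conventional shape in k8s.io/api/core/v1 terms — the 3607s expiry and the kube-root-ca.crt source are the usual upstream defaults, assumed here rather than read from this cluster:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        expiry := int64(3607) // default bound-token lifetime used by the admission plugin
        vol := corev1.Volume{
            Name: "kube-api-access-6z4nz",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        // the service-account token itself
                        {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                            Path: "token", ExpirationSeconds: &expiry}},
                        // the cluster CA bundle
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
                            Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}}}},
                        // the pod's namespace via the downward API
                        {DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "namespace",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}}}},
                    },
                },
            },
        }
        fmt.Println(vol.Name)
    }
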
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/970b855e-e278-4e6b-b9ba-733f8f798f59-apiservice-cert\") pod \"keystone-operator-controller-manager-7598465c56-xt6m7\" (UID: \"970b855e-e278-4e6b-b9ba-733f8f798f59\") " pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.660197 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/970b855e-e278-4e6b-b9ba-733f8f798f59-webhook-cert\") pod \"keystone-operator-controller-manager-7598465c56-xt6m7\" (UID: \"970b855e-e278-4e6b-b9ba-733f8f798f59\") " pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.677512 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z4nz\" (UniqueName: \"kubernetes.io/projected/970b855e-e278-4e6b-b9ba-733f8f798f59-kube-api-access-6z4nz\") pod \"keystone-operator-controller-manager-7598465c56-xt6m7\" (UID: \"970b855e-e278-4e6b-b9ba-733f8f798f59\") " pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.700607 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:41 crc kubenswrapper[4763]: I0131 15:10:41.211414 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7"] Jan 31 15:10:41 crc kubenswrapper[4763]: W0131 15:10:41.240320 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod970b855e_e278_4e6b_b9ba_733f8f798f59.slice/crio-93fdfe6ee228b8c9acf7872bab934f86017e6a84f9785022dd9b65f97c76da74 WatchSource:0}: Error finding container 93fdfe6ee228b8c9acf7872bab934f86017e6a84f9785022dd9b65f97c76da74: Status 404 returned error can't find the container with id 93fdfe6ee228b8c9acf7872bab934f86017e6a84f9785022dd9b65f97c76da74 Jan 31 15:10:41 crc kubenswrapper[4763]: I0131 15:10:41.423663 4763 generic.go:334] "Generic (PLEG): container finished" podID="dee0d43f-8ff0-4094-9833-92cda38ee182" containerID="6811b875a2287d4113a9be20ece01fcce4b87486deb0296107994f35e34ccfcc" exitCode=0 Jan 31 15:10:41 crc kubenswrapper[4763]: I0131 15:10:41.423787 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"dee0d43f-8ff0-4094-9833-92cda38ee182","Type":"ContainerDied","Data":"6811b875a2287d4113a9be20ece01fcce4b87486deb0296107994f35e34ccfcc"} Jan 31 15:10:41 crc kubenswrapper[4763]: I0131 15:10:41.425531 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" event={"ID":"970b855e-e278-4e6b-b9ba-733f8f798f59","Type":"ContainerStarted","Data":"93fdfe6ee228b8c9acf7872bab934f86017e6a84f9785022dd9b65f97c76da74"} Jan 31 15:10:42 crc kubenswrapper[4763]: I0131 15:10:42.488389 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"dee0d43f-8ff0-4094-9833-92cda38ee182","Type":"ContainerStarted","Data":"d8cfe849fe563d5150ded6223866da779040c7b0ad70c45e5be36992721a721c"} Jan 31 15:10:42 crc kubenswrapper[4763]: I0131 15:10:42.488626 4763 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:42 crc kubenswrapper[4763]: I0131 15:10:42.564723 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/rabbitmq-server-0" podStartSLOduration=36.475642595 podStartE2EDuration="42.564675132s" podCreationTimestamp="2026-01-31 15:10:00 +0000 UTC" firstStartedPulling="2026-01-31 15:10:02.402833298 +0000 UTC m=+922.157571601" lastFinishedPulling="2026-01-31 15:10:08.491865825 +0000 UTC m=+928.246604138" observedRunningTime="2026-01-31 15:10:42.563160182 +0000 UTC m=+962.317898475" watchObservedRunningTime="2026-01-31 15:10:42.564675132 +0000 UTC m=+962.319413435" Jan 31 15:10:44 crc kubenswrapper[4763]: I0131 15:10:44.177189 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:10:44 crc kubenswrapper[4763]: I0131 15:10:44.177655 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:10:44 crc kubenswrapper[4763]: I0131 15:10:44.188715 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 15:10:44 crc kubenswrapper[4763]: I0131 15:10:44.189260 4763 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b85e19e54b5edf80a14f0fece0b4788a8867e6919fb49643606ef0f2d6f8bd3e"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:10:44 crc kubenswrapper[4763]: I0131 15:10:44.189314 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://b85e19e54b5edf80a14f0fece0b4788a8867e6919fb49643606ef0f2d6f8bd3e" gracePeriod=600 Jan 31 15:10:44 crc kubenswrapper[4763]: I0131 15:10:44.505269 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="b85e19e54b5edf80a14f0fece0b4788a8867e6919fb49643606ef0f2d6f8bd3e" exitCode=0 Jan 31 15:10:44 crc kubenswrapper[4763]: I0131 15:10:44.505306 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"b85e19e54b5edf80a14f0fece0b4788a8867e6919fb49643606ef0f2d6f8bd3e"} Jan 31 15:10:44 crc kubenswrapper[4763]: I0131 15:10:44.505337 4763 scope.go:117] "RemoveContainer" containerID="b6da57a4479d9d9e6a720f55c269fed96eb09e97fe8b846af0fb3bb3cfb46085" Jan 31 15:10:45 crc kubenswrapper[4763]: I0131 15:10:45.512463 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" 
event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"f20a9db2f936557dee18355e3894646df84495361a6df22f393e9e76f8aebb8e"} Jan 31 15:10:45 crc kubenswrapper[4763]: I0131 15:10:45.514873 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" event={"ID":"970b855e-e278-4e6b-b9ba-733f8f798f59","Type":"ContainerStarted","Data":"b0efb0ccd0af22ccdb931385cfe59c4525e9c4b4102052e89268cfbb08e6d64c"} Jan 31 15:10:45 crc kubenswrapper[4763]: I0131 15:10:45.514993 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:45 crc kubenswrapper[4763]: I0131 15:10:45.553831 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" podStartSLOduration=2.439723111 podStartE2EDuration="5.553811528s" podCreationTimestamp="2026-01-31 15:10:40 +0000 UTC" firstStartedPulling="2026-01-31 15:10:41.244256143 +0000 UTC m=+960.998994476" lastFinishedPulling="2026-01-31 15:10:44.3583446 +0000 UTC m=+964.113082893" observedRunningTime="2026-01-31 15:10:45.549832722 +0000 UTC m=+965.304571055" watchObservedRunningTime="2026-01-31 15:10:45.553811528 +0000 UTC m=+965.308549821" Jan 31 15:10:50 crc kubenswrapper[4763]: I0131 15:10:50.707901 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:51 crc kubenswrapper[4763]: I0131 15:10:51.925884 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.232246 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-38d2-account-create-update-cpmns"] Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.233414 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.235353 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-db-secret" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.240162 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-db-create-xn2nh"] Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.241075 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.248493 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-38d2-account-create-update-cpmns"] Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.261068 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-create-xn2nh"] Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.364306 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f97vz\" (UniqueName: \"kubernetes.io/projected/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-kube-api-access-f97vz\") pod \"keystone-38d2-account-create-update-cpmns\" (UID: \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\") " pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.364400 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1994b227-dbc6-494a-886d-4573eee02640-operator-scripts\") pod \"keystone-db-create-xn2nh\" (UID: \"1994b227-dbc6-494a-886d-4573eee02640\") " pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.364462 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-operator-scripts\") pod \"keystone-38d2-account-create-update-cpmns\" (UID: \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\") " pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.364614 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2g6d\" (UniqueName: \"kubernetes.io/projected/1994b227-dbc6-494a-886d-4573eee02640-kube-api-access-m2g6d\") pod \"keystone-db-create-xn2nh\" (UID: \"1994b227-dbc6-494a-886d-4573eee02640\") " pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.465713 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1994b227-dbc6-494a-886d-4573eee02640-operator-scripts\") pod \"keystone-db-create-xn2nh\" (UID: \"1994b227-dbc6-494a-886d-4573eee02640\") " pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.465789 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-operator-scripts\") pod \"keystone-38d2-account-create-update-cpmns\" (UID: \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\") " pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.465853 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2g6d\" (UniqueName: \"kubernetes.io/projected/1994b227-dbc6-494a-886d-4573eee02640-kube-api-access-m2g6d\") pod \"keystone-db-create-xn2nh\" (UID: \"1994b227-dbc6-494a-886d-4573eee02640\") " pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.465891 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-f97vz\" (UniqueName: \"kubernetes.io/projected/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-kube-api-access-f97vz\") pod \"keystone-38d2-account-create-update-cpmns\" (UID: \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\") " pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.466461 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1994b227-dbc6-494a-886d-4573eee02640-operator-scripts\") pod \"keystone-db-create-xn2nh\" (UID: \"1994b227-dbc6-494a-886d-4573eee02640\") " pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.466620 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-operator-scripts\") pod \"keystone-38d2-account-create-update-cpmns\" (UID: \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\") " pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.489728 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2g6d\" (UniqueName: \"kubernetes.io/projected/1994b227-dbc6-494a-886d-4573eee02640-kube-api-access-m2g6d\") pod \"keystone-db-create-xn2nh\" (UID: \"1994b227-dbc6-494a-886d-4573eee02640\") " pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.490542 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f97vz\" (UniqueName: \"kubernetes.io/projected/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-kube-api-access-f97vz\") pod \"keystone-38d2-account-create-update-cpmns\" (UID: \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\") " pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.595897 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.606995 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:57 crc kubenswrapper[4763]: I0131 15:10:57.026129 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-38d2-account-create-update-cpmns"] Jan 31 15:10:57 crc kubenswrapper[4763]: I0131 15:10:57.096465 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-create-xn2nh"] Jan 31 15:10:57 crc kubenswrapper[4763]: I0131 15:10:57.603722 4763 generic.go:334] "Generic (PLEG): container finished" podID="4a1a2199-bf73-476a-8a6b-c50b1c26aa6c" containerID="6b27f13fa86685c4b37caba09090beecec2d3e1290d084b6ae1cf269665b318e" exitCode=0 Jan 31 15:10:57 crc kubenswrapper[4763]: I0131 15:10:57.604080 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" event={"ID":"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c","Type":"ContainerDied","Data":"6b27f13fa86685c4b37caba09090beecec2d3e1290d084b6ae1cf269665b318e"} Jan 31 15:10:57 crc kubenswrapper[4763]: I0131 15:10:57.604115 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" event={"ID":"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c","Type":"ContainerStarted","Data":"eee91691610658059d63311404e674f57b1539ec80ce3317c767f9c0bed213fc"} Jan 31 15:10:57 crc kubenswrapper[4763]: I0131 15:10:57.605299 4763 generic.go:334] "Generic (PLEG): container finished" podID="1994b227-dbc6-494a-886d-4573eee02640" containerID="22a73cc01e38d3368c2378a8856a884268bdaab9f5443ff62ac66e26d223ed89" exitCode=0 Jan 31 15:10:57 crc kubenswrapper[4763]: I0131 15:10:57.605327 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-xn2nh" event={"ID":"1994b227-dbc6-494a-886d-4573eee02640","Type":"ContainerDied","Data":"22a73cc01e38d3368c2378a8856a884268bdaab9f5443ff62ac66e26d223ed89"} Jan 31 15:10:57 crc kubenswrapper[4763]: I0131 15:10:57.605341 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-xn2nh" event={"ID":"1994b227-dbc6-494a-886d-4573eee02640","Type":"ContainerStarted","Data":"159103466e2db66e884cd684ba7173a176780e2f7a9849d363b643f121180ccd"} Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.045195 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.050997 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.101098 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f97vz\" (UniqueName: \"kubernetes.io/projected/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-kube-api-access-f97vz\") pod \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\" (UID: \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\") " Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.101281 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-operator-scripts\") pod \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\" (UID: \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\") " Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.101346 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1994b227-dbc6-494a-886d-4573eee02640-operator-scripts\") pod \"1994b227-dbc6-494a-886d-4573eee02640\" (UID: \"1994b227-dbc6-494a-886d-4573eee02640\") " Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.101401 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2g6d\" (UniqueName: \"kubernetes.io/projected/1994b227-dbc6-494a-886d-4573eee02640-kube-api-access-m2g6d\") pod \"1994b227-dbc6-494a-886d-4573eee02640\" (UID: \"1994b227-dbc6-494a-886d-4573eee02640\") " Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.101907 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a1a2199-bf73-476a-8a6b-c50b1c26aa6c" (UID: "4a1a2199-bf73-476a-8a6b-c50b1c26aa6c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.101914 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1994b227-dbc6-494a-886d-4573eee02640-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1994b227-dbc6-494a-886d-4573eee02640" (UID: "1994b227-dbc6-494a-886d-4573eee02640"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.101985 4763 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.102005 4763 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1994b227-dbc6-494a-886d-4573eee02640-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.109170 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1994b227-dbc6-494a-886d-4573eee02640-kube-api-access-m2g6d" (OuterVolumeSpecName: "kube-api-access-m2g6d") pod "1994b227-dbc6-494a-886d-4573eee02640" (UID: "1994b227-dbc6-494a-886d-4573eee02640"). InnerVolumeSpecName "kube-api-access-m2g6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.115021 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-kube-api-access-f97vz" (OuterVolumeSpecName: "kube-api-access-f97vz") pod "4a1a2199-bf73-476a-8a6b-c50b1c26aa6c" (UID: "4a1a2199-bf73-476a-8a6b-c50b1c26aa6c"). InnerVolumeSpecName "kube-api-access-f97vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.140667 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-index-2w984"] Jan 31 15:10:59 crc kubenswrapper[4763]: E0131 15:10:59.140984 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a1a2199-bf73-476a-8a6b-c50b1c26aa6c" containerName="mariadb-account-create-update" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.141004 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1a2199-bf73-476a-8a6b-c50b1c26aa6c" containerName="mariadb-account-create-update" Jan 31 15:10:59 crc kubenswrapper[4763]: E0131 15:10:59.141032 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1994b227-dbc6-494a-886d-4573eee02640" containerName="mariadb-database-create" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.141038 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="1994b227-dbc6-494a-886d-4573eee02640" containerName="mariadb-database-create" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.141134 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="1994b227-dbc6-494a-886d-4573eee02640" containerName="mariadb-database-create" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.141152 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a1a2199-bf73-476a-8a6b-c50b1c26aa6c" containerName="mariadb-account-create-update" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.141572 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-index-2w984" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.144296 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-index-dockercfg-5rn7l" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.148409 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-2w984"] Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.204539 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wrxw\" (UniqueName: \"kubernetes.io/projected/ef84b681-2ea6-4684-84c0-6d452a5b47df-kube-api-access-5wrxw\") pod \"barbican-operator-index-2w984\" (UID: \"ef84b681-2ea6-4684-84c0-6d452a5b47df\") " pod="openstack-operators/barbican-operator-index-2w984" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.204621 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f97vz\" (UniqueName: \"kubernetes.io/projected/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-kube-api-access-f97vz\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.204639 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2g6d\" (UniqueName: \"kubernetes.io/projected/1994b227-dbc6-494a-886d-4573eee02640-kube-api-access-m2g6d\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.306085 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wrxw\" (UniqueName: \"kubernetes.io/projected/ef84b681-2ea6-4684-84c0-6d452a5b47df-kube-api-access-5wrxw\") pod \"barbican-operator-index-2w984\" (UID: \"ef84b681-2ea6-4684-84c0-6d452a5b47df\") " pod="openstack-operators/barbican-operator-index-2w984" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.337473 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wrxw\" (UniqueName: \"kubernetes.io/projected/ef84b681-2ea6-4684-84c0-6d452a5b47df-kube-api-access-5wrxw\") pod \"barbican-operator-index-2w984\" (UID: \"ef84b681-2ea6-4684-84c0-6d452a5b47df\") " pod="openstack-operators/barbican-operator-index-2w984" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.465428 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-index-2w984" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.631832 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" event={"ID":"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c","Type":"ContainerDied","Data":"eee91691610658059d63311404e674f57b1539ec80ce3317c767f9c0bed213fc"} Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.632070 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee91691610658059d63311404e674f57b1539ec80ce3317c767f9c0bed213fc" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.632136 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.641618 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-xn2nh" event={"ID":"1994b227-dbc6-494a-886d-4573eee02640","Type":"ContainerDied","Data":"159103466e2db66e884cd684ba7173a176780e2f7a9849d363b643f121180ccd"} Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.641656 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="159103466e2db66e884cd684ba7173a176780e2f7a9849d363b643f121180ccd" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.641764 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.896355 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-2w984"] Jan 31 15:10:59 crc kubenswrapper[4763]: W0131 15:10:59.903174 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef84b681_2ea6_4684_84c0_6d452a5b47df.slice/crio-607bbf8ff5faec621683cf864459e22b972f3e71cd8ec83e52d3481a3dda7762 WatchSource:0}: Error finding container 607bbf8ff5faec621683cf864459e22b972f3e71cd8ec83e52d3481a3dda7762: Status 404 returned error can't find the container with id 607bbf8ff5faec621683cf864459e22b972f3e71cd8ec83e52d3481a3dda7762 Jan 31 15:11:00 crc kubenswrapper[4763]: I0131 15:11:00.651785 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-2w984" event={"ID":"ef84b681-2ea6-4684-84c0-6d452a5b47df","Type":"ContainerStarted","Data":"607bbf8ff5faec621683cf864459e22b972f3e71cd8ec83e52d3481a3dda7762"} Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.659407 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-2w984" event={"ID":"ef84b681-2ea6-4684-84c0-6d452a5b47df","Type":"ContainerStarted","Data":"ba02d9d5067b6cfc2c84bd386cf6d1363bb1b4a505c72bd10a8bd2b21693417d"} Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.683484 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-index-2w984" podStartSLOduration=1.844176913 podStartE2EDuration="2.683462546s" podCreationTimestamp="2026-01-31 15:10:59 +0000 UTC" firstStartedPulling="2026-01-31 15:10:59.905291718 +0000 UTC m=+979.660030011" lastFinishedPulling="2026-01-31 15:11:00.744577341 +0000 UTC m=+980.499315644" observedRunningTime="2026-01-31 15:11:01.683310962 +0000 UTC m=+981.438049265" watchObservedRunningTime="2026-01-31 15:11:01.683462546 +0000 UTC m=+981.438200839" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.799777 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-db-sync-cfz59"] Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.800481 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.801903 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.802641 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-skmvm" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.802785 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.805058 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.811938 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-cfz59"] Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.846434 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576fbbd2-e600-40a9-95f4-2772c96807f1-config-data\") pod \"keystone-db-sync-cfz59\" (UID: \"576fbbd2-e600-40a9-95f4-2772c96807f1\") " pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.846784 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxl58\" (UniqueName: \"kubernetes.io/projected/576fbbd2-e600-40a9-95f4-2772c96807f1-kube-api-access-pxl58\") pod \"keystone-db-sync-cfz59\" (UID: \"576fbbd2-e600-40a9-95f4-2772c96807f1\") " pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.947760 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576fbbd2-e600-40a9-95f4-2772c96807f1-config-data\") pod \"keystone-db-sync-cfz59\" (UID: \"576fbbd2-e600-40a9-95f4-2772c96807f1\") " pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.947845 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxl58\" (UniqueName: \"kubernetes.io/projected/576fbbd2-e600-40a9-95f4-2772c96807f1-kube-api-access-pxl58\") pod \"keystone-db-sync-cfz59\" (UID: \"576fbbd2-e600-40a9-95f4-2772c96807f1\") " pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.953557 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576fbbd2-e600-40a9-95f4-2772c96807f1-config-data\") pod \"keystone-db-sync-cfz59\" (UID: \"576fbbd2-e600-40a9-95f4-2772c96807f1\") " pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.963282 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxl58\" (UniqueName: \"kubernetes.io/projected/576fbbd2-e600-40a9-95f4-2772c96807f1-kube-api-access-pxl58\") pod \"keystone-db-sync-cfz59\" (UID: \"576fbbd2-e600-40a9-95f4-2772c96807f1\") " pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:02 crc kubenswrapper[4763]: I0131 15:11:02.120452 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:02 crc kubenswrapper[4763]: I0131 15:11:02.599930 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-cfz59"] Jan 31 15:11:02 crc kubenswrapper[4763]: W0131 15:11:02.603367 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod576fbbd2_e600_40a9_95f4_2772c96807f1.slice/crio-092f98dde1cbfd08cabc98509525e608a932af9160563cb66a514af33b160da5 WatchSource:0}: Error finding container 092f98dde1cbfd08cabc98509525e608a932af9160563cb66a514af33b160da5: Status 404 returned error can't find the container with id 092f98dde1cbfd08cabc98509525e608a932af9160563cb66a514af33b160da5 Jan 31 15:11:02 crc kubenswrapper[4763]: I0131 15:11:02.665435 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-cfz59" event={"ID":"576fbbd2-e600-40a9-95f4-2772c96807f1","Type":"ContainerStarted","Data":"092f98dde1cbfd08cabc98509525e608a932af9160563cb66a514af33b160da5"} Jan 31 15:11:09 crc kubenswrapper[4763]: I0131 15:11:09.466540 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-index-2w984" Jan 31 15:11:09 crc kubenswrapper[4763]: I0131 15:11:09.468048 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/barbican-operator-index-2w984" Jan 31 15:11:09 crc kubenswrapper[4763]: I0131 15:11:09.507760 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/barbican-operator-index-2w984" Jan 31 15:11:09 crc kubenswrapper[4763]: I0131 15:11:09.768604 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-index-2w984" Jan 31 15:11:12 crc kubenswrapper[4763]: I0131 15:11:12.749427 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-cfz59" event={"ID":"576fbbd2-e600-40a9-95f4-2772c96807f1","Type":"ContainerStarted","Data":"d4b2fbc8cb2358be37b442ce5253eaaf842807de68a087674a3ea1292f2dd38e"} Jan 31 15:11:12 crc kubenswrapper[4763]: I0131 15:11:12.765374 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-db-sync-cfz59" podStartSLOduration=2.077762499 podStartE2EDuration="11.765353588s" podCreationTimestamp="2026-01-31 15:11:01 +0000 UTC" firstStartedPulling="2026-01-31 15:11:02.605394844 +0000 UTC m=+982.360133137" lastFinishedPulling="2026-01-31 15:11:12.292985923 +0000 UTC m=+992.047724226" observedRunningTime="2026-01-31 15:11:12.763328935 +0000 UTC m=+992.518067248" watchObservedRunningTime="2026-01-31 15:11:12.765353588 +0000 UTC m=+992.520091921" Jan 31 15:11:15 crc kubenswrapper[4763]: I0131 15:11:15.772446 4763 generic.go:334] "Generic (PLEG): container finished" podID="576fbbd2-e600-40a9-95f4-2772c96807f1" containerID="d4b2fbc8cb2358be37b442ce5253eaaf842807de68a087674a3ea1292f2dd38e" exitCode=0 Jan 31 15:11:15 crc kubenswrapper[4763]: I0131 15:11:15.772555 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-cfz59" event={"ID":"576fbbd2-e600-40a9-95f4-2772c96807f1","Type":"ContainerDied","Data":"d4b2fbc8cb2358be37b442ce5253eaaf842807de68a087674a3ea1292f2dd38e"} Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.064779 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.198511 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxl58\" (UniqueName: \"kubernetes.io/projected/576fbbd2-e600-40a9-95f4-2772c96807f1-kube-api-access-pxl58\") pod \"576fbbd2-e600-40a9-95f4-2772c96807f1\" (UID: \"576fbbd2-e600-40a9-95f4-2772c96807f1\") " Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.198687 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576fbbd2-e600-40a9-95f4-2772c96807f1-config-data\") pod \"576fbbd2-e600-40a9-95f4-2772c96807f1\" (UID: \"576fbbd2-e600-40a9-95f4-2772c96807f1\") " Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.203642 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/576fbbd2-e600-40a9-95f4-2772c96807f1-kube-api-access-pxl58" (OuterVolumeSpecName: "kube-api-access-pxl58") pod "576fbbd2-e600-40a9-95f4-2772c96807f1" (UID: "576fbbd2-e600-40a9-95f4-2772c96807f1"). InnerVolumeSpecName "kube-api-access-pxl58". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.244947 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/576fbbd2-e600-40a9-95f4-2772c96807f1-config-data" (OuterVolumeSpecName: "config-data") pod "576fbbd2-e600-40a9-95f4-2772c96807f1" (UID: "576fbbd2-e600-40a9-95f4-2772c96807f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.300586 4763 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576fbbd2-e600-40a9-95f4-2772c96807f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.300618 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxl58\" (UniqueName: \"kubernetes.io/projected/576fbbd2-e600-40a9-95f4-2772c96807f1-kube-api-access-pxl58\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.775472 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck"] Jan 31 15:11:17 crc kubenswrapper[4763]: E0131 15:11:17.776155 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="576fbbd2-e600-40a9-95f4-2772c96807f1" containerName="keystone-db-sync" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.776194 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="576fbbd2-e600-40a9-95f4-2772c96807f1" containerName="keystone-db-sync" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.776615 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="576fbbd2-e600-40a9-95f4-2772c96807f1" containerName="keystone-db-sync" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.781465 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.797349 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-cfz59" event={"ID":"576fbbd2-e600-40a9-95f4-2772c96807f1","Type":"ContainerDied","Data":"092f98dde1cbfd08cabc98509525e608a932af9160563cb66a514af33b160da5"} Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.797419 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="092f98dde1cbfd08cabc98509525e608a932af9160563cb66a514af33b160da5" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.797471 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.800517 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck"] Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.819681 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rrv7w" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.909849 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz5fb\" (UniqueName: \"kubernetes.io/projected/82458dee-ae6f-46c9-ac1b-745146c8b9bf-kube-api-access-mz5fb\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.910089 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-bundle\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.910203 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-util\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.982689 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-tjb5m"] Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.983962 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.989578 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-skmvm" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.990137 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.990943 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.991009 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"osp-secret" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.992411 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.004158 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-tjb5m"] Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.011569 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz5fb\" (UniqueName: \"kubernetes.io/projected/82458dee-ae6f-46c9-ac1b-745146c8b9bf-kube-api-access-mz5fb\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.011613 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-bundle\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.011640 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-util\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.012074 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-util\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.012307 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-bundle\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.034210 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz5fb\" (UniqueName: 
\"kubernetes.io/projected/82458dee-ae6f-46c9-ac1b-745146c8b9bf-kube-api-access-mz5fb\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.113559 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-scripts\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.113752 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-credential-keys\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.113935 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-fernet-keys\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.114023 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-config-data\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.114150 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkltv\" (UniqueName: \"kubernetes.io/projected/9b3e2d68-7406-4653-85ed-41746d3a6ea7-kube-api-access-xkltv\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.132324 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.215940 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-config-data\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.215998 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkltv\" (UniqueName: \"kubernetes.io/projected/9b3e2d68-7406-4653-85ed-41746d3a6ea7-kube-api-access-xkltv\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.216031 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-scripts\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.216073 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-credential-keys\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.216109 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-fernet-keys\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.220005 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-credential-keys\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.220361 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-fernet-keys\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.221358 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-scripts\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.221987 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-config-data\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc 
kubenswrapper[4763]: I0131 15:11:18.237307 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkltv\" (UniqueName: \"kubernetes.io/projected/9b3e2d68-7406-4653-85ed-41746d3a6ea7-kube-api-access-xkltv\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.309295 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.638896 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck"] Jan 31 15:11:18 crc kubenswrapper[4763]: W0131 15:11:18.642429 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82458dee_ae6f_46c9_ac1b_745146c8b9bf.slice/crio-25c3f401ab83dd80391b3e9558c432539ccdd0588556306daaa7f4287ca7bd8b WatchSource:0}: Error finding container 25c3f401ab83dd80391b3e9558c432539ccdd0588556306daaa7f4287ca7bd8b: Status 404 returned error can't find the container with id 25c3f401ab83dd80391b3e9558c432539ccdd0588556306daaa7f4287ca7bd8b Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.728572 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-tjb5m"] Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.805049 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" event={"ID":"82458dee-ae6f-46c9-ac1b-745146c8b9bf","Type":"ContainerStarted","Data":"25c3f401ab83dd80391b3e9558c432539ccdd0588556306daaa7f4287ca7bd8b"} Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.806618 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" event={"ID":"9b3e2d68-7406-4653-85ed-41746d3a6ea7","Type":"ContainerStarted","Data":"a96a11be8a6ffc55d8b5833c9e559c88fe1548a90837d48887b0742767b1be95"} Jan 31 15:11:19 crc kubenswrapper[4763]: I0131 15:11:19.818014 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" event={"ID":"9b3e2d68-7406-4653-85ed-41746d3a6ea7","Type":"ContainerStarted","Data":"411cd9ca106798ab981b147bd785e0f4defae6f019c51c98fdfff57480304b59"} Jan 31 15:11:19 crc kubenswrapper[4763]: I0131 15:11:19.821160 4763 generic.go:334] "Generic (PLEG): container finished" podID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerID="c99bff41e355c4dcb44e21f9886148266110b133cde0aeb7743a27e9d27a9b88" exitCode=0 Jan 31 15:11:19 crc kubenswrapper[4763]: I0131 15:11:19.821204 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" event={"ID":"82458dee-ae6f-46c9-ac1b-745146c8b9bf","Type":"ContainerDied","Data":"c99bff41e355c4dcb44e21f9886148266110b133cde0aeb7743a27e9d27a9b88"} Jan 31 15:11:19 crc kubenswrapper[4763]: I0131 15:11:19.846473 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" podStartSLOduration=2.846453356 podStartE2EDuration="2.846453356s" podCreationTimestamp="2026-01-31 15:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:11:19.84241793 +0000 
UTC m=+999.597156253" watchObservedRunningTime="2026-01-31 15:11:19.846453356 +0000 UTC m=+999.601191659" Jan 31 15:11:21 crc kubenswrapper[4763]: I0131 15:11:21.837690 4763 generic.go:334] "Generic (PLEG): container finished" podID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerID="a50eb69c245fbaf455b79ed7afb47369d0c54938bbb5415939e372cfafbbcbe4" exitCode=0 Jan 31 15:11:21 crc kubenswrapper[4763]: I0131 15:11:21.837815 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" event={"ID":"82458dee-ae6f-46c9-ac1b-745146c8b9bf","Type":"ContainerDied","Data":"a50eb69c245fbaf455b79ed7afb47369d0c54938bbb5415939e372cfafbbcbe4"} Jan 31 15:11:22 crc kubenswrapper[4763]: I0131 15:11:22.847538 4763 generic.go:334] "Generic (PLEG): container finished" podID="9b3e2d68-7406-4653-85ed-41746d3a6ea7" containerID="411cd9ca106798ab981b147bd785e0f4defae6f019c51c98fdfff57480304b59" exitCode=0 Jan 31 15:11:22 crc kubenswrapper[4763]: I0131 15:11:22.847620 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" event={"ID":"9b3e2d68-7406-4653-85ed-41746d3a6ea7","Type":"ContainerDied","Data":"411cd9ca106798ab981b147bd785e0f4defae6f019c51c98fdfff57480304b59"} Jan 31 15:11:22 crc kubenswrapper[4763]: I0131 15:11:22.850772 4763 generic.go:334] "Generic (PLEG): container finished" podID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerID="1d3c6f773d8fd7f2fd40aba8f0ca445f528da2d1b35ad1ac37b8569b4236bbf8" exitCode=0 Jan 31 15:11:22 crc kubenswrapper[4763]: I0131 15:11:22.850796 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" event={"ID":"82458dee-ae6f-46c9-ac1b-745146c8b9bf","Type":"ContainerDied","Data":"1d3c6f773d8fd7f2fd40aba8f0ca445f528da2d1b35ad1ac37b8569b4236bbf8"} Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.275940 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.283544 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.406428 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-fernet-keys\") pod \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.406523 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-util\") pod \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.406565 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-config-data\") pod \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.406616 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkltv\" (UniqueName: \"kubernetes.io/projected/9b3e2d68-7406-4653-85ed-41746d3a6ea7-kube-api-access-xkltv\") pod \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.406712 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz5fb\" (UniqueName: \"kubernetes.io/projected/82458dee-ae6f-46c9-ac1b-745146c8b9bf-kube-api-access-mz5fb\") pod \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.406768 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-credential-keys\") pod \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.406802 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-bundle\") pod \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.406831 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-scripts\") pod \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.409165 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-bundle" (OuterVolumeSpecName: "bundle") pod "82458dee-ae6f-46c9-ac1b-745146c8b9bf" (UID: "82458dee-ae6f-46c9-ac1b-745146c8b9bf"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.412242 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b3e2d68-7406-4653-85ed-41746d3a6ea7-kube-api-access-xkltv" (OuterVolumeSpecName: "kube-api-access-xkltv") pod "9b3e2d68-7406-4653-85ed-41746d3a6ea7" (UID: "9b3e2d68-7406-4653-85ed-41746d3a6ea7"). InnerVolumeSpecName "kube-api-access-xkltv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.412327 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9b3e2d68-7406-4653-85ed-41746d3a6ea7" (UID: "9b3e2d68-7406-4653-85ed-41746d3a6ea7"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.412493 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-scripts" (OuterVolumeSpecName: "scripts") pod "9b3e2d68-7406-4653-85ed-41746d3a6ea7" (UID: "9b3e2d68-7406-4653-85ed-41746d3a6ea7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.413259 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9b3e2d68-7406-4653-85ed-41746d3a6ea7" (UID: "9b3e2d68-7406-4653-85ed-41746d3a6ea7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.413954 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82458dee-ae6f-46c9-ac1b-745146c8b9bf-kube-api-access-mz5fb" (OuterVolumeSpecName: "kube-api-access-mz5fb") pod "82458dee-ae6f-46c9-ac1b-745146c8b9bf" (UID: "82458dee-ae6f-46c9-ac1b-745146c8b9bf"). InnerVolumeSpecName "kube-api-access-mz5fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.435443 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-config-data" (OuterVolumeSpecName: "config-data") pod "9b3e2d68-7406-4653-85ed-41746d3a6ea7" (UID: "9b3e2d68-7406-4653-85ed-41746d3a6ea7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.509195 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz5fb\" (UniqueName: \"kubernetes.io/projected/82458dee-ae6f-46c9-ac1b-745146c8b9bf-kube-api-access-mz5fb\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.509237 4763 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.509264 4763 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.509277 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.509290 4763 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.509303 4763 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.509319 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkltv\" (UniqueName: \"kubernetes.io/projected/9b3e2d68-7406-4653-85ed-41746d3a6ea7-kube-api-access-xkltv\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.804276 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-util" (OuterVolumeSpecName: "util") pod "82458dee-ae6f-46c9-ac1b-745146c8b9bf" (UID: "82458dee-ae6f-46c9-ac1b-745146c8b9bf"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.814901 4763 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-util\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.873349 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.874265 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" event={"ID":"9b3e2d68-7406-4653-85ed-41746d3a6ea7","Type":"ContainerDied","Data":"a96a11be8a6ffc55d8b5833c9e559c88fe1548a90837d48887b0742767b1be95"} Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.874358 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a96a11be8a6ffc55d8b5833c9e559c88fe1548a90837d48887b0742767b1be95" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.878335 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" event={"ID":"82458dee-ae6f-46c9-ac1b-745146c8b9bf","Type":"ContainerDied","Data":"25c3f401ab83dd80391b3e9558c432539ccdd0588556306daaa7f4287ca7bd8b"} Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.878375 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25c3f401ab83dd80391b3e9558c432539ccdd0588556306daaa7f4287ca7bd8b" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.878418 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.998490 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-7659668474-6698l"] Jan 31 15:11:24 crc kubenswrapper[4763]: E0131 15:11:24.998778 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerName="extract" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.998789 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerName="extract" Jan 31 15:11:24 crc kubenswrapper[4763]: E0131 15:11:24.998807 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerName="pull" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.998813 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerName="pull" Jan 31 15:11:24 crc kubenswrapper[4763]: E0131 15:11:24.998820 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerName="util" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.998826 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerName="util" Jan 31 15:11:24 crc kubenswrapper[4763]: E0131 15:11:24.998835 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b3e2d68-7406-4653-85ed-41746d3a6ea7" containerName="keystone-bootstrap" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.998841 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b3e2d68-7406-4653-85ed-41746d3a6ea7" containerName="keystone-bootstrap" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.998954 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerName="extract" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.998974 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b3e2d68-7406-4653-85ed-41746d3a6ea7" containerName="keystone-bootstrap" Jan 31 15:11:24 crc 
kubenswrapper[4763]: I0131 15:11:24.999479 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.003094 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.003417 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.003559 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.005708 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-skmvm" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.022582 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-7659668474-6698l"] Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.124877 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-scripts\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.124923 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-credential-keys\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.124951 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-config-data\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.124978 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htfnp\" (UniqueName: \"kubernetes.io/projected/791f5002-b2b5-488c-99c8-5ed511cffed2-kube-api-access-htfnp\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.125043 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-fernet-keys\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.226247 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-fernet-keys\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.226326 4763 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-scripts\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.226363 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-credential-keys\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.226385 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-config-data\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.226417 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htfnp\" (UniqueName: \"kubernetes.io/projected/791f5002-b2b5-488c-99c8-5ed511cffed2-kube-api-access-htfnp\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.231679 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-fernet-keys\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.232410 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-scripts\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.233156 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-credential-keys\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.234793 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-config-data\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.256565 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htfnp\" (UniqueName: \"kubernetes.io/projected/791f5002-b2b5-488c-99c8-5ed511cffed2-kube-api-access-htfnp\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.324126 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.591479 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-7659668474-6698l"] Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.885684 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-7659668474-6698l" event={"ID":"791f5002-b2b5-488c-99c8-5ed511cffed2","Type":"ContainerStarted","Data":"b5a67cda7cedf37898174bd8c26fa8b6ca9c316498d1a188eafaf7a0a061fb31"} Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.885767 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-7659668474-6698l" event={"ID":"791f5002-b2b5-488c-99c8-5ed511cffed2","Type":"ContainerStarted","Data":"ebf8be13087df49c14a90235b8a6897359f30896de4cc0d771817a61a26e48a4"} Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.885806 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.059284 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-7659668474-6698l" podStartSLOduration=12.059267592 podStartE2EDuration="12.059267592s" podCreationTimestamp="2026-01-31 15:11:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:11:25.903102047 +0000 UTC m=+1005.657840350" watchObservedRunningTime="2026-01-31 15:11:36.059267592 +0000 UTC m=+1015.814005885" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.060014 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh"] Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.060791 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.062564 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-service-cert" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.064575 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-9gltf" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.072719 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh"] Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.211604 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5-webhook-cert\") pod \"barbican-operator-controller-manager-56656dfbf6-dtcnh\" (UID: \"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5\") " pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.211935 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5-apiservice-cert\") pod \"barbican-operator-controller-manager-56656dfbf6-dtcnh\" (UID: \"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5\") " pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.211995 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq8x5\" (UniqueName: \"kubernetes.io/projected/8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5-kube-api-access-wq8x5\") pod \"barbican-operator-controller-manager-56656dfbf6-dtcnh\" (UID: \"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5\") " pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.313306 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq8x5\" (UniqueName: \"kubernetes.io/projected/8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5-kube-api-access-wq8x5\") pod \"barbican-operator-controller-manager-56656dfbf6-dtcnh\" (UID: \"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5\") " pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.313378 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5-webhook-cert\") pod \"barbican-operator-controller-manager-56656dfbf6-dtcnh\" (UID: \"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5\") " pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.313408 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5-apiservice-cert\") pod \"barbican-operator-controller-manager-56656dfbf6-dtcnh\" (UID: \"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5\") " pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.323478 4763 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5-apiservice-cert\") pod \"barbican-operator-controller-manager-56656dfbf6-dtcnh\" (UID: \"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5\") " pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.323516 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5-webhook-cert\") pod \"barbican-operator-controller-manager-56656dfbf6-dtcnh\" (UID: \"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5\") " pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.340229 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq8x5\" (UniqueName: \"kubernetes.io/projected/8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5-kube-api-access-wq8x5\") pod \"barbican-operator-controller-manager-56656dfbf6-dtcnh\" (UID: \"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5\") " pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.390600 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.824671 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh"] Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.836141 4763 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.982057 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" event={"ID":"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5","Type":"ContainerStarted","Data":"19a6b8b6f0d398d2b52a9e74b825a08d77af27515af66305ec6ec80ffaf21b75"} Jan 31 15:11:40 crc kubenswrapper[4763]: I0131 15:11:40.004043 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" event={"ID":"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5","Type":"ContainerStarted","Data":"f3133d68dda34dc46cfc087e9ca55b949f26235398e1875b2cdef3729b25ae32"} Jan 31 15:11:40 crc kubenswrapper[4763]: I0131 15:11:40.004687 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:40 crc kubenswrapper[4763]: I0131 15:11:40.029203 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" podStartSLOduration=1.590889644 podStartE2EDuration="4.029187769s" podCreationTimestamp="2026-01-31 15:11:36 +0000 UTC" firstStartedPulling="2026-01-31 15:11:36.835819787 +0000 UTC m=+1016.590558080" lastFinishedPulling="2026-01-31 15:11:39.274117912 +0000 UTC m=+1019.028856205" observedRunningTime="2026-01-31 15:11:40.025260725 +0000 UTC m=+1019.779999028" watchObservedRunningTime="2026-01-31 15:11:40.029187769 +0000 UTC m=+1019.783926072" Jan 31 15:11:46 crc kubenswrapper[4763]: I0131 15:11:46.396513 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.626107 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-db-create-fxrtm"] Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.628504 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-fxrtm" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.633950 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-0181-account-create-update-96pj2"] Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.635292 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.636837 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-db-secret" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.641764 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-create-fxrtm"] Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.668901 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-0181-account-create-update-96pj2"] Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.730658 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/464b92bd-fb87-4fc5-aa90-5460b1e35eec-operator-scripts\") pod \"barbican-db-create-fxrtm\" (UID: \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\") " pod="swift-kuttl-tests/barbican-db-create-fxrtm" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.730823 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtw9n\" (UniqueName: \"kubernetes.io/projected/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-kube-api-access-rtw9n\") pod \"barbican-0181-account-create-update-96pj2\" (UID: \"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\") " pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.731043 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-operator-scripts\") pod \"barbican-0181-account-create-update-96pj2\" (UID: \"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\") " pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.731095 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkcmf\" (UniqueName: \"kubernetes.io/projected/464b92bd-fb87-4fc5-aa90-5460b1e35eec-kube-api-access-dkcmf\") pod \"barbican-db-create-fxrtm\" (UID: \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\") " pod="swift-kuttl-tests/barbican-db-create-fxrtm" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.784683 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.832851 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/464b92bd-fb87-4fc5-aa90-5460b1e35eec-operator-scripts\") pod 
\"barbican-db-create-fxrtm\" (UID: \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\") " pod="swift-kuttl-tests/barbican-db-create-fxrtm" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.833122 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtw9n\" (UniqueName: \"kubernetes.io/projected/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-kube-api-access-rtw9n\") pod \"barbican-0181-account-create-update-96pj2\" (UID: \"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\") " pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.833252 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-operator-scripts\") pod \"barbican-0181-account-create-update-96pj2\" (UID: \"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\") " pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.833326 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkcmf\" (UniqueName: \"kubernetes.io/projected/464b92bd-fb87-4fc5-aa90-5460b1e35eec-kube-api-access-dkcmf\") pod \"barbican-db-create-fxrtm\" (UID: \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\") " pod="swift-kuttl-tests/barbican-db-create-fxrtm" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.833968 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-operator-scripts\") pod \"barbican-0181-account-create-update-96pj2\" (UID: \"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\") " pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.834911 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/464b92bd-fb87-4fc5-aa90-5460b1e35eec-operator-scripts\") pod \"barbican-db-create-fxrtm\" (UID: \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\") " pod="swift-kuttl-tests/barbican-db-create-fxrtm" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.853749 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkcmf\" (UniqueName: \"kubernetes.io/projected/464b92bd-fb87-4fc5-aa90-5460b1e35eec-kube-api-access-dkcmf\") pod \"barbican-db-create-fxrtm\" (UID: \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\") " pod="swift-kuttl-tests/barbican-db-create-fxrtm" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.861347 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtw9n\" (UniqueName: \"kubernetes.io/projected/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-kube-api-access-rtw9n\") pod \"barbican-0181-account-create-update-96pj2\" (UID: \"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\") " pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.951516 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-fxrtm" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.963734 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" Jan 31 15:11:57 crc kubenswrapper[4763]: I0131 15:11:57.480630 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-create-fxrtm"] Jan 31 15:11:57 crc kubenswrapper[4763]: W0131 15:11:57.481332 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod464b92bd_fb87_4fc5_aa90_5460b1e35eec.slice/crio-2b9f1a613814224afc6183a529e2db1d1f86c3eb7cdbd5b9033f065ce6895594 WatchSource:0}: Error finding container 2b9f1a613814224afc6183a529e2db1d1f86c3eb7cdbd5b9033f065ce6895594: Status 404 returned error can't find the container with id 2b9f1a613814224afc6183a529e2db1d1f86c3eb7cdbd5b9033f065ce6895594 Jan 31 15:11:57 crc kubenswrapper[4763]: I0131 15:11:57.566247 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-0181-account-create-update-96pj2"] Jan 31 15:11:57 crc kubenswrapper[4763]: W0131 15:11:57.574569 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe55f4fd_e12f_4bcf_ab19_d71977f3e6ec.slice/crio-e6c361fc8437a025933c599cfb9a4239985b83b89d07f5332e2973ff8dc6920c WatchSource:0}: Error finding container e6c361fc8437a025933c599cfb9a4239985b83b89d07f5332e2973ff8dc6920c: Status 404 returned error can't find the container with id e6c361fc8437a025933c599cfb9a4239985b83b89d07f5332e2973ff8dc6920c Jan 31 15:11:58 crc kubenswrapper[4763]: I0131 15:11:58.148184 4763 generic.go:334] "Generic (PLEG): container finished" podID="be55f4fd-e12f-4bcf-ab19-d71977f3e6ec" containerID="a540ed0c915c9ec8346e959a99b0e8cef75297ffb67063cbd5e427a00b227441" exitCode=0 Jan 31 15:11:58 crc kubenswrapper[4763]: I0131 15:11:58.148278 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" event={"ID":"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec","Type":"ContainerDied","Data":"a540ed0c915c9ec8346e959a99b0e8cef75297ffb67063cbd5e427a00b227441"} Jan 31 15:11:58 crc kubenswrapper[4763]: I0131 15:11:58.148429 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" event={"ID":"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec","Type":"ContainerStarted","Data":"e6c361fc8437a025933c599cfb9a4239985b83b89d07f5332e2973ff8dc6920c"} Jan 31 15:11:58 crc kubenswrapper[4763]: I0131 15:11:58.150404 4763 generic.go:334] "Generic (PLEG): container finished" podID="464b92bd-fb87-4fc5-aa90-5460b1e35eec" containerID="fee6200b81ed36139edc76fff1de6f650a35a10f71c9521569ecb3d4c7be34df" exitCode=0 Jan 31 15:11:58 crc kubenswrapper[4763]: I0131 15:11:58.150456 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-fxrtm" event={"ID":"464b92bd-fb87-4fc5-aa90-5460b1e35eec","Type":"ContainerDied","Data":"fee6200b81ed36139edc76fff1de6f650a35a10f71c9521569ecb3d4c7be34df"} Jan 31 15:11:58 crc kubenswrapper[4763]: I0131 15:11:58.150526 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-fxrtm" event={"ID":"464b92bd-fb87-4fc5-aa90-5460b1e35eec","Type":"ContainerStarted","Data":"2b9f1a613814224afc6183a529e2db1d1f86c3eb7cdbd5b9033f065ce6895594"} Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.348666 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-index-h5chr"] Jan 31 15:11:59 crc kubenswrapper[4763]: 
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.352424 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-index-dockercfg-mpf7p"
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.371644 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-h5chr"]
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.468228 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27sk4\" (UniqueName: \"kubernetes.io/projected/2c571391-06de-46b1-8932-99d44a63dc42-kube-api-access-27sk4\") pod \"swift-operator-index-h5chr\" (UID: \"2c571391-06de-46b1-8932-99d44a63dc42\") " pod="openstack-operators/swift-operator-index-h5chr"
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.559972 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-fxrtm"
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.564575 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2"
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.569656 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27sk4\" (UniqueName: \"kubernetes.io/projected/2c571391-06de-46b1-8932-99d44a63dc42-kube-api-access-27sk4\") pod \"swift-operator-index-h5chr\" (UID: \"2c571391-06de-46b1-8932-99d44a63dc42\") " pod="openstack-operators/swift-operator-index-h5chr"
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.623379 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27sk4\" (UniqueName: \"kubernetes.io/projected/2c571391-06de-46b1-8932-99d44a63dc42-kube-api-access-27sk4\") pod \"swift-operator-index-h5chr\" (UID: \"2c571391-06de-46b1-8932-99d44a63dc42\") " pod="openstack-operators/swift-operator-index-h5chr"
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.670759 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkcmf\" (UniqueName: \"kubernetes.io/projected/464b92bd-fb87-4fc5-aa90-5460b1e35eec-kube-api-access-dkcmf\") pod \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\" (UID: \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\") "
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.671068 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/464b92bd-fb87-4fc5-aa90-5460b1e35eec-operator-scripts\") pod \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\" (UID: \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\") "
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.671163 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-operator-scripts\") pod \"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\" (UID: \"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\") "
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.671220 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtw9n\" (UniqueName: \"kubernetes.io/projected/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-kube-api-access-rtw9n\") pod \"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\" (UID: \"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\") "
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.671716 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464b92bd-fb87-4fc5-aa90-5460b1e35eec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "464b92bd-fb87-4fc5-aa90-5460b1e35eec" (UID: "464b92bd-fb87-4fc5-aa90-5460b1e35eec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.671786 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be55f4fd-e12f-4bcf-ab19-d71977f3e6ec" (UID: "be55f4fd-e12f-4bcf-ab19-d71977f3e6ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.674471 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-kube-api-access-rtw9n" (OuterVolumeSpecName: "kube-api-access-rtw9n") pod "be55f4fd-e12f-4bcf-ab19-d71977f3e6ec" (UID: "be55f4fd-e12f-4bcf-ab19-d71977f3e6ec"). InnerVolumeSpecName "kube-api-access-rtw9n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.676174 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-h5chr"
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.676898 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/464b92bd-fb87-4fc5-aa90-5460b1e35eec-kube-api-access-dkcmf" (OuterVolumeSpecName: "kube-api-access-dkcmf") pod "464b92bd-fb87-4fc5-aa90-5460b1e35eec" (UID: "464b92bd-fb87-4fc5-aa90-5460b1e35eec"). InnerVolumeSpecName "kube-api-access-dkcmf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.772859 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtw9n\" (UniqueName: \"kubernetes.io/projected/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-kube-api-access-rtw9n\") on node \"crc\" DevicePath \"\""
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.772883 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkcmf\" (UniqueName: \"kubernetes.io/projected/464b92bd-fb87-4fc5-aa90-5460b1e35eec-kube-api-access-dkcmf\") on node \"crc\" DevicePath \"\""
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.772892 4763 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/464b92bd-fb87-4fc5-aa90-5460b1e35eec-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.772899 4763 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 15:12:00 crc kubenswrapper[4763]: I0131 15:12:00.109071 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-h5chr"]
Jan 31 15:12:00 crc kubenswrapper[4763]: W0131 15:12:00.113476 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c571391_06de_46b1_8932_99d44a63dc42.slice/crio-7d69566b93d62e3d00554e026fc47d6e3957f18bf85461fc0a22a07c1e2974d9 WatchSource:0}: Error finding container 7d69566b93d62e3d00554e026fc47d6e3957f18bf85461fc0a22a07c1e2974d9: Status 404 returned error can't find the container with id 7d69566b93d62e3d00554e026fc47d6e3957f18bf85461fc0a22a07c1e2974d9
Jan 31 15:12:00 crc kubenswrapper[4763]: I0131 15:12:00.175845 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-h5chr" event={"ID":"2c571391-06de-46b1-8932-99d44a63dc42","Type":"ContainerStarted","Data":"7d69566b93d62e3d00554e026fc47d6e3957f18bf85461fc0a22a07c1e2974d9"}
Jan 31 15:12:00 crc kubenswrapper[4763]: I0131 15:12:00.178616 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-fxrtm"
Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-fxrtm" Jan 31 15:12:00 crc kubenswrapper[4763]: I0131 15:12:00.178613 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-fxrtm" event={"ID":"464b92bd-fb87-4fc5-aa90-5460b1e35eec","Type":"ContainerDied","Data":"2b9f1a613814224afc6183a529e2db1d1f86c3eb7cdbd5b9033f065ce6895594"} Jan 31 15:12:00 crc kubenswrapper[4763]: I0131 15:12:00.178742 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b9f1a613814224afc6183a529e2db1d1f86c3eb7cdbd5b9033f065ce6895594" Jan 31 15:12:00 crc kubenswrapper[4763]: I0131 15:12:00.181605 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" event={"ID":"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec","Type":"ContainerDied","Data":"e6c361fc8437a025933c599cfb9a4239985b83b89d07f5332e2973ff8dc6920c"} Jan 31 15:12:00 crc kubenswrapper[4763]: I0131 15:12:00.181645 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6c361fc8437a025933c599cfb9a4239985b83b89d07f5332e2973ff8dc6920c" Jan 31 15:12:00 crc kubenswrapper[4763]: I0131 15:12:00.181795 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" Jan 31 15:12:01 crc kubenswrapper[4763]: I0131 15:12:01.907346 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-db-sync-q2gqt"] Jan 31 15:12:01 crc kubenswrapper[4763]: E0131 15:12:01.908041 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be55f4fd-e12f-4bcf-ab19-d71977f3e6ec" containerName="mariadb-account-create-update" Jan 31 15:12:01 crc kubenswrapper[4763]: I0131 15:12:01.908052 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="be55f4fd-e12f-4bcf-ab19-d71977f3e6ec" containerName="mariadb-account-create-update" Jan 31 15:12:01 crc kubenswrapper[4763]: E0131 15:12:01.908073 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464b92bd-fb87-4fc5-aa90-5460b1e35eec" containerName="mariadb-database-create" Jan 31 15:12:01 crc kubenswrapper[4763]: I0131 15:12:01.908098 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="464b92bd-fb87-4fc5-aa90-5460b1e35eec" containerName="mariadb-database-create" Jan 31 15:12:01 crc kubenswrapper[4763]: I0131 15:12:01.908203 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="464b92bd-fb87-4fc5-aa90-5460b1e35eec" containerName="mariadb-database-create" Jan 31 15:12:01 crc kubenswrapper[4763]: I0131 15:12:01.908215 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="be55f4fd-e12f-4bcf-ab19-d71977f3e6ec" containerName="mariadb-account-create-update" Jan 31 15:12:01 crc kubenswrapper[4763]: I0131 15:12:01.908610 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:01 crc kubenswrapper[4763]: I0131 15:12:01.910832 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-config-data" Jan 31 15:12:01 crc kubenswrapper[4763]: I0131 15:12:01.910920 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-barbican-dockercfg-vtvb4" Jan 31 15:12:01 crc kubenswrapper[4763]: I0131 15:12:01.919254 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-q2gqt"] Jan 31 15:12:02 crc kubenswrapper[4763]: I0131 15:12:02.018257 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d76ca4ae-ac08-455d-af41-ec673a980e8e-db-sync-config-data\") pod \"barbican-db-sync-q2gqt\" (UID: \"d76ca4ae-ac08-455d-af41-ec673a980e8e\") " pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:02 crc kubenswrapper[4763]: I0131 15:12:02.018332 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh6kg\" (UniqueName: \"kubernetes.io/projected/d76ca4ae-ac08-455d-af41-ec673a980e8e-kube-api-access-jh6kg\") pod \"barbican-db-sync-q2gqt\" (UID: \"d76ca4ae-ac08-455d-af41-ec673a980e8e\") " pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:02 crc kubenswrapper[4763]: I0131 15:12:02.119307 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d76ca4ae-ac08-455d-af41-ec673a980e8e-db-sync-config-data\") pod \"barbican-db-sync-q2gqt\" (UID: \"d76ca4ae-ac08-455d-af41-ec673a980e8e\") " pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:02 crc kubenswrapper[4763]: I0131 15:12:02.119346 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh6kg\" (UniqueName: \"kubernetes.io/projected/d76ca4ae-ac08-455d-af41-ec673a980e8e-kube-api-access-jh6kg\") pod \"barbican-db-sync-q2gqt\" (UID: \"d76ca4ae-ac08-455d-af41-ec673a980e8e\") " pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:02 crc kubenswrapper[4763]: I0131 15:12:02.134564 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d76ca4ae-ac08-455d-af41-ec673a980e8e-db-sync-config-data\") pod \"barbican-db-sync-q2gqt\" (UID: \"d76ca4ae-ac08-455d-af41-ec673a980e8e\") " pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:02 crc kubenswrapper[4763]: I0131 15:12:02.134597 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh6kg\" (UniqueName: \"kubernetes.io/projected/d76ca4ae-ac08-455d-af41-ec673a980e8e-kube-api-access-jh6kg\") pod \"barbican-db-sync-q2gqt\" (UID: \"d76ca4ae-ac08-455d-af41-ec673a980e8e\") " pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:02 crc kubenswrapper[4763]: I0131 15:12:02.233148 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:03 crc kubenswrapper[4763]: I0131 15:12:03.056499 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-q2gqt"] Jan 31 15:12:03 crc kubenswrapper[4763]: W0131 15:12:03.065477 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd76ca4ae_ac08_455d_af41_ec673a980e8e.slice/crio-ea89c28319b091916cba9282193936577d192707c220da6b00d746cd490c6871 WatchSource:0}: Error finding container ea89c28319b091916cba9282193936577d192707c220da6b00d746cd490c6871: Status 404 returned error can't find the container with id ea89c28319b091916cba9282193936577d192707c220da6b00d746cd490c6871 Jan 31 15:12:03 crc kubenswrapper[4763]: I0131 15:12:03.213176 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-q2gqt" event={"ID":"d76ca4ae-ac08-455d-af41-ec673a980e8e","Type":"ContainerStarted","Data":"ea89c28319b091916cba9282193936577d192707c220da6b00d746cd490c6871"} Jan 31 15:12:03 crc kubenswrapper[4763]: I0131 15:12:03.214615 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-h5chr" event={"ID":"2c571391-06de-46b1-8932-99d44a63dc42","Type":"ContainerStarted","Data":"b7621ef6d6502af209e566c99a4a7e4d26b476519cfba05da9733d997636d2d0"} Jan 31 15:12:03 crc kubenswrapper[4763]: I0131 15:12:03.230053 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-index-h5chr" podStartSLOduration=1.7134884879999999 podStartE2EDuration="4.230038832s" podCreationTimestamp="2026-01-31 15:11:59 +0000 UTC" firstStartedPulling="2026-01-31 15:12:00.117349562 +0000 UTC m=+1039.872087855" lastFinishedPulling="2026-01-31 15:12:02.633899906 +0000 UTC m=+1042.388638199" observedRunningTime="2026-01-31 15:12:03.227400342 +0000 UTC m=+1042.982138645" watchObservedRunningTime="2026-01-31 15:12:03.230038832 +0000 UTC m=+1042.984777115" Jan 31 15:12:07 crc kubenswrapper[4763]: I0131 15:12:07.239732 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-q2gqt" event={"ID":"d76ca4ae-ac08-455d-af41-ec673a980e8e","Type":"ContainerStarted","Data":"ad4a03306832e4128d821649efae8a9cb32add168b52d49dcd7430a5b6a1cda9"} Jan 31 15:12:07 crc kubenswrapper[4763]: I0131 15:12:07.263118 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-db-sync-q2gqt" podStartSLOduration=2.67715373 podStartE2EDuration="6.263086318s" podCreationTimestamp="2026-01-31 15:12:01 +0000 UTC" firstStartedPulling="2026-01-31 15:12:03.067599729 +0000 UTC m=+1042.822338012" lastFinishedPulling="2026-01-31 15:12:06.653532297 +0000 UTC m=+1046.408270600" observedRunningTime="2026-01-31 15:12:07.260225353 +0000 UTC m=+1047.014963686" watchObservedRunningTime="2026-01-31 15:12:07.263086318 +0000 UTC m=+1047.017824651" Jan 31 15:12:09 crc kubenswrapper[4763]: I0131 15:12:09.677021 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-index-h5chr" Jan 31 15:12:09 crc kubenswrapper[4763]: I0131 15:12:09.677385 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/swift-operator-index-h5chr" Jan 31 15:12:09 crc kubenswrapper[4763]: I0131 15:12:09.708075 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/swift-operator-index-h5chr" Jan 31 15:12:10 crc kubenswrapper[4763]: I0131 15:12:10.265352 4763 generic.go:334] "Generic (PLEG): container finished" podID="d76ca4ae-ac08-455d-af41-ec673a980e8e" containerID="ad4a03306832e4128d821649efae8a9cb32add168b52d49dcd7430a5b6a1cda9" exitCode=0 Jan 31 15:12:10 crc kubenswrapper[4763]: I0131 15:12:10.265478 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-q2gqt" event={"ID":"d76ca4ae-ac08-455d-af41-ec673a980e8e","Type":"ContainerDied","Data":"ad4a03306832e4128d821649efae8a9cb32add168b52d49dcd7430a5b6a1cda9"} Jan 31 15:12:10 crc kubenswrapper[4763]: I0131 15:12:10.311616 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-index-h5chr" Jan 31 15:12:11 crc kubenswrapper[4763]: I0131 15:12:11.681475 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:11 crc kubenswrapper[4763]: I0131 15:12:11.771971 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d76ca4ae-ac08-455d-af41-ec673a980e8e-db-sync-config-data\") pod \"d76ca4ae-ac08-455d-af41-ec673a980e8e\" (UID: \"d76ca4ae-ac08-455d-af41-ec673a980e8e\") " Jan 31 15:12:11 crc kubenswrapper[4763]: I0131 15:12:11.772182 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh6kg\" (UniqueName: \"kubernetes.io/projected/d76ca4ae-ac08-455d-af41-ec673a980e8e-kube-api-access-jh6kg\") pod \"d76ca4ae-ac08-455d-af41-ec673a980e8e\" (UID: \"d76ca4ae-ac08-455d-af41-ec673a980e8e\") " Jan 31 15:12:11 crc kubenswrapper[4763]: I0131 15:12:11.778944 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d76ca4ae-ac08-455d-af41-ec673a980e8e-kube-api-access-jh6kg" (OuterVolumeSpecName: "kube-api-access-jh6kg") pod "d76ca4ae-ac08-455d-af41-ec673a980e8e" (UID: "d76ca4ae-ac08-455d-af41-ec673a980e8e"). InnerVolumeSpecName "kube-api-access-jh6kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:12:11 crc kubenswrapper[4763]: I0131 15:12:11.783134 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d76ca4ae-ac08-455d-af41-ec673a980e8e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d76ca4ae-ac08-455d-af41-ec673a980e8e" (UID: "d76ca4ae-ac08-455d-af41-ec673a980e8e"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:12:11 crc kubenswrapper[4763]: I0131 15:12:11.874367 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh6kg\" (UniqueName: \"kubernetes.io/projected/d76ca4ae-ac08-455d-af41-ec673a980e8e-kube-api-access-jh6kg\") on node \"crc\" DevicePath \"\"" Jan 31 15:12:11 crc kubenswrapper[4763]: I0131 15:12:11.874682 4763 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d76ca4ae-ac08-455d-af41-ec673a980e8e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.285568 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-q2gqt" event={"ID":"d76ca4ae-ac08-455d-af41-ec673a980e8e","Type":"ContainerDied","Data":"ea89c28319b091916cba9282193936577d192707c220da6b00d746cd490c6871"} Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.285650 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea89c28319b091916cba9282193936577d192707c220da6b00d746cd490c6871" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.285675 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.403858 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k"] Jan 31 15:12:12 crc kubenswrapper[4763]: E0131 15:12:12.404457 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76ca4ae-ac08-455d-af41-ec673a980e8e" containerName="barbican-db-sync" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.404499 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76ca4ae-ac08-455d-af41-ec673a980e8e" containerName="barbican-db-sync" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.404880 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="d76ca4ae-ac08-455d-af41-ec673a980e8e" containerName="barbican-db-sync" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.407075 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.409383 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rrv7w" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.412025 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k"] Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.482920 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-util\") pod \"b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.482952 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-bundle\") pod \"b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.483003 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hgpl\" (UniqueName: \"kubernetes.io/projected/7b615184-cd97-4133-b2e4-fc44e41d1e6b-kube-api-access-7hgpl\") pod \"b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.552847 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm"] Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.554946 4763 util.go:30] "No sandbox for pod can be found. 
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.558200 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-config-data"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.559070 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-worker-config-data"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.560088 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-barbican-dockercfg-vtvb4"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.586490 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-util\") pod \"b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.586546 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-bundle\") pod \"b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.586604 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hgpl\" (UniqueName: \"kubernetes.io/projected/7b615184-cd97-4133-b2e4-fc44e41d1e6b-kube-api-access-7hgpl\") pod \"b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.587544 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-bundle\") pod \"b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.587650 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-util\") pod \"b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.606520 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt"]
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.609483 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.612489 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-keystone-listener-config-data"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.617757 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hgpl\" (UniqueName: \"kubernetes.io/projected/7b615184-cd97-4133-b2e4-fc44e41d1e6b-kube-api-access-7hgpl\") pod \"b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.659401 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt"]
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.687640 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-config-data-custom\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.687687 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-config-data\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.687733 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfsvg\" (UniqueName: \"kubernetes.io/projected/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-kube-api-access-qfsvg\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.687756 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-config-data\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.687799 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-logs\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.687821 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-logs\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.687848 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-config-data-custom\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.687878 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nln7z\" (UniqueName: \"kubernetes.io/projected/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-kube-api-access-nln7z\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.694929 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm"]
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.731294 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.788993 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nln7z\" (UniqueName: \"kubernetes.io/projected/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-kube-api-access-nln7z\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.789060 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-config-data-custom\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.789087 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-config-data\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.789105 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfsvg\" (UniqueName: \"kubernetes.io/projected/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-kube-api-access-qfsvg\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.789127 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-config-data\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt"
Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.789162 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-logs\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm"
\"kubernetes.io/empty-dir/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-logs\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.789189 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-logs\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.789752 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-config-data-custom\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.789814 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-logs\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.789818 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-logs\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.794093 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-config-data\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.794368 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-config-data\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.795170 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-config-data-custom\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.795360 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-config-data-custom\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.811418 4763 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qfsvg\" (UniqueName: \"kubernetes.io/projected/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-kube-api-access-qfsvg\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.812028 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nln7z\" (UniqueName: \"kubernetes.io/projected/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-kube-api-access-nln7z\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.820748 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-api-697dc779fb-sgr8v"] Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.821789 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.823877 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-api-config-data" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.833420 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-api-697dc779fb-sgr8v"] Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.890915 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl6nj\" (UniqueName: \"kubernetes.io/projected/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-kube-api-access-hl6nj\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.891011 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-config-data-custom\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.891034 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-logs\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.891115 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-config-data\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.977195 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.984193 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.992199 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-config-data\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.992273 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl6nj\" (UniqueName: \"kubernetes.io/projected/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-kube-api-access-hl6nj\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.992335 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-config-data-custom\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.992355 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-logs\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.993175 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-logs\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.996673 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-config-data\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.996957 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-config-data-custom\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:13 crc kubenswrapper[4763]: I0131 15:12:13.007652 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl6nj\" (UniqueName: \"kubernetes.io/projected/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-kube-api-access-hl6nj\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:13 crc kubenswrapper[4763]: I0131 15:12:13.197868 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:13 crc kubenswrapper[4763]: I0131 15:12:13.210208 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k"] Jan 31 15:12:13 crc kubenswrapper[4763]: I0131 15:12:13.297623 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" event={"ID":"7b615184-cd97-4133-b2e4-fc44e41d1e6b","Type":"ContainerStarted","Data":"82f049d2d2d9f665f4c95a5038e4786af0956c4319bb2f39dcf6a264d7aa9e98"} Jan 31 15:12:13 crc kubenswrapper[4763]: I0131 15:12:13.384361 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm"] Jan 31 15:12:13 crc kubenswrapper[4763]: W0131 15:12:13.394170 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49dd2bcf_ceb5_4df8_8a24_eec8de703f88.slice/crio-79a9272bfc6e8c88b0d0981a9dd45e2e14d6f8213da1ce55bd59e57078037da6 WatchSource:0}: Error finding container 79a9272bfc6e8c88b0d0981a9dd45e2e14d6f8213da1ce55bd59e57078037da6: Status 404 returned error can't find the container with id 79a9272bfc6e8c88b0d0981a9dd45e2e14d6f8213da1ce55bd59e57078037da6 Jan 31 15:12:13 crc kubenswrapper[4763]: I0131 15:12:13.451927 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt"] Jan 31 15:12:13 crc kubenswrapper[4763]: W0131 15:12:13.504616 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae9bd061_c69e_4ff5_acd4_2b953c4b1657.slice/crio-73f1783cc2402a9a4768c2a16b7b7a92bcd1dd342d279be98a5ede23e82ba662 WatchSource:0}: Error finding container 73f1783cc2402a9a4768c2a16b7b7a92bcd1dd342d279be98a5ede23e82ba662: Status 404 returned error can't find the container with id 73f1783cc2402a9a4768c2a16b7b7a92bcd1dd342d279be98a5ede23e82ba662 Jan 31 15:12:13 crc kubenswrapper[4763]: I0131 15:12:13.621995 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-api-697dc779fb-sgr8v"] Jan 31 15:12:13 crc kubenswrapper[4763]: W0131 15:12:13.624857 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d7c9f19_bf9f_4c6c_a113_a10d6be02620.slice/crio-22bc3eac2bf0850b76d7244dae6bada86563af8adaa101368070b1a3efe3c4a8 WatchSource:0}: Error finding container 22bc3eac2bf0850b76d7244dae6bada86563af8adaa101368070b1a3efe3c4a8: Status 404 returned error can't find the container with id 22bc3eac2bf0850b76d7244dae6bada86563af8adaa101368070b1a3efe3c4a8 Jan 31 15:12:14 crc kubenswrapper[4763]: I0131 15:12:14.304894 4763 generic.go:334] "Generic (PLEG): container finished" podID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerID="83f481ef83a36e058ad57e8887c57a159a7b965bf32a4e8fcc3c0e9d4eb867c4" exitCode=0 Jan 31 15:12:14 crc kubenswrapper[4763]: I0131 15:12:14.305507 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" event={"ID":"7b615184-cd97-4133-b2e4-fc44e41d1e6b","Type":"ContainerDied","Data":"83f481ef83a36e058ad57e8887c57a159a7b965bf32a4e8fcc3c0e9d4eb867c4"} Jan 31 15:12:14 crc kubenswrapper[4763]: I0131 15:12:14.307126 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" event={"ID":"49dd2bcf-ceb5-4df8-8a24-eec8de703f88","Type":"ContainerStarted","Data":"79a9272bfc6e8c88b0d0981a9dd45e2e14d6f8213da1ce55bd59e57078037da6"} Jan 31 15:12:14 crc kubenswrapper[4763]: I0131 15:12:14.309055 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" event={"ID":"5d7c9f19-bf9f-4c6c-a113-a10d6be02620","Type":"ContainerStarted","Data":"22bc3eac2bf0850b76d7244dae6bada86563af8adaa101368070b1a3efe3c4a8"} Jan 31 15:12:14 crc kubenswrapper[4763]: I0131 15:12:14.310237 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" event={"ID":"ae9bd061-c69e-4ff5-acd4-2b953c4b1657","Type":"ContainerStarted","Data":"73f1783cc2402a9a4768c2a16b7b7a92bcd1dd342d279be98a5ede23e82ba662"} Jan 31 15:12:15 crc kubenswrapper[4763]: I0131 15:12:15.317189 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" event={"ID":"5d7c9f19-bf9f-4c6c-a113-a10d6be02620","Type":"ContainerStarted","Data":"9107d62a1e173e6aaf93f341f03623980f36c449de1070e7f85051dd0e0ed55a"} Jan 31 15:12:15 crc kubenswrapper[4763]: I0131 15:12:15.317509 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:15 crc kubenswrapper[4763]: I0131 15:12:15.317520 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" event={"ID":"5d7c9f19-bf9f-4c6c-a113-a10d6be02620","Type":"ContainerStarted","Data":"919605ae139098002974e02b46a5992073ecabcc697338106b310bef772d521a"} Jan 31 15:12:15 crc kubenswrapper[4763]: I0131 15:12:15.317530 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:15 crc kubenswrapper[4763]: I0131 15:12:15.338724 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" podStartSLOduration=3.338707713 podStartE2EDuration="3.338707713s" podCreationTimestamp="2026-01-31 15:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:12:15.334222755 +0000 UTC m=+1055.088961048" watchObservedRunningTime="2026-01-31 15:12:15.338707713 +0000 UTC m=+1055.093446006" Jan 31 15:12:16 crc kubenswrapper[4763]: I0131 15:12:16.330034 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" event={"ID":"49dd2bcf-ceb5-4df8-8a24-eec8de703f88","Type":"ContainerStarted","Data":"eb595ea3d89a26b0075353f00f22467e1266f84f62274e060d46868558049b7f"} Jan 31 15:12:16 crc kubenswrapper[4763]: I0131 15:12:16.331493 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" event={"ID":"49dd2bcf-ceb5-4df8-8a24-eec8de703f88","Type":"ContainerStarted","Data":"d0952e31ef39c9a55d20702a5d1138af741af92ad43e99993cb10dc96469f78a"} Jan 31 15:12:16 crc kubenswrapper[4763]: I0131 15:12:16.332413 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" event={"ID":"ae9bd061-c69e-4ff5-acd4-2b953c4b1657","Type":"ContainerStarted","Data":"9fd92a3e1c9681ddc011cc682b6fc793279b23073d079a2f0d066f3d5e188268"} Jan 31 15:12:16 crc kubenswrapper[4763]: I0131 15:12:16.332471 4763 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" event={"ID":"ae9bd061-c69e-4ff5-acd4-2b953c4b1657","Type":"ContainerStarted","Data":"cca922b5b3603bdd61b9f82e387d61119f809e5dc027f28836b250484884c25c"} Jan 31 15:12:16 crc kubenswrapper[4763]: I0131 15:12:16.334251 4763 generic.go:334] "Generic (PLEG): container finished" podID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerID="9c1b6dba72d2b968ba45652a3dc1383db9c2d8a84715b8baabd77b509878687e" exitCode=0 Jan 31 15:12:16 crc kubenswrapper[4763]: I0131 15:12:16.334317 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" event={"ID":"7b615184-cd97-4133-b2e4-fc44e41d1e6b","Type":"ContainerDied","Data":"9c1b6dba72d2b968ba45652a3dc1383db9c2d8a84715b8baabd77b509878687e"} Jan 31 15:12:16 crc kubenswrapper[4763]: I0131 15:12:16.354479 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" podStartSLOduration=1.98904389 podStartE2EDuration="4.354456589s" podCreationTimestamp="2026-01-31 15:12:12 +0000 UTC" firstStartedPulling="2026-01-31 15:12:13.400798123 +0000 UTC m=+1053.155536416" lastFinishedPulling="2026-01-31 15:12:15.766210782 +0000 UTC m=+1055.520949115" observedRunningTime="2026-01-31 15:12:16.351186443 +0000 UTC m=+1056.105924766" watchObservedRunningTime="2026-01-31 15:12:16.354456589 +0000 UTC m=+1056.109194882" Jan 31 15:12:16 crc kubenswrapper[4763]: I0131 15:12:16.402012 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" podStartSLOduration=2.077802986 podStartE2EDuration="4.401963075s" podCreationTimestamp="2026-01-31 15:12:12 +0000 UTC" firstStartedPulling="2026-01-31 15:12:13.507824841 +0000 UTC m=+1053.262563134" lastFinishedPulling="2026-01-31 15:12:15.83198492 +0000 UTC m=+1055.586723223" observedRunningTime="2026-01-31 15:12:16.393922512 +0000 UTC m=+1056.148660875" watchObservedRunningTime="2026-01-31 15:12:16.401963075 +0000 UTC m=+1056.156701418" Jan 31 15:12:17 crc kubenswrapper[4763]: I0131 15:12:17.344287 4763 generic.go:334] "Generic (PLEG): container finished" podID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerID="cba9275b9b94971ddb5814ace082a578caac0bca4a10413edc10e664eda952ea" exitCode=0 Jan 31 15:12:17 crc kubenswrapper[4763]: I0131 15:12:17.344339 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" event={"ID":"7b615184-cd97-4133-b2e4-fc44e41d1e6b","Type":"ContainerDied","Data":"cba9275b9b94971ddb5814ace082a578caac0bca4a10413edc10e664eda952ea"} Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.755336 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.876872 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hgpl\" (UniqueName: \"kubernetes.io/projected/7b615184-cd97-4133-b2e4-fc44e41d1e6b-kube-api-access-7hgpl\") pod \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.877164 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-util\") pod \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.877292 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-bundle\") pod \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.878791 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-bundle" (OuterVolumeSpecName: "bundle") pod "7b615184-cd97-4133-b2e4-fc44e41d1e6b" (UID: "7b615184-cd97-4133-b2e4-fc44e41d1e6b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.885971 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b615184-cd97-4133-b2e4-fc44e41d1e6b-kube-api-access-7hgpl" (OuterVolumeSpecName: "kube-api-access-7hgpl") pod "7b615184-cd97-4133-b2e4-fc44e41d1e6b" (UID: "7b615184-cd97-4133-b2e4-fc44e41d1e6b"). InnerVolumeSpecName "kube-api-access-7hgpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.966650 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-util" (OuterVolumeSpecName: "util") pod "7b615184-cd97-4133-b2e4-fc44e41d1e6b" (UID: "7b615184-cd97-4133-b2e4-fc44e41d1e6b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.982385 4763 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-util\") on node \"crc\" DevicePath \"\"" Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.982437 4763 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.982970 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hgpl\" (UniqueName: \"kubernetes.io/projected/7b615184-cd97-4133-b2e4-fc44e41d1e6b-kube-api-access-7hgpl\") on node \"crc\" DevicePath \"\"" Jan 31 15:12:19 crc kubenswrapper[4763]: I0131 15:12:19.361300 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" event={"ID":"7b615184-cd97-4133-b2e4-fc44e41d1e6b","Type":"ContainerDied","Data":"82f049d2d2d9f665f4c95a5038e4786af0956c4319bb2f39dcf6a264d7aa9e98"} Jan 31 15:12:19 crc kubenswrapper[4763]: I0131 15:12:19.361748 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82f049d2d2d9f665f4c95a5038e4786af0956c4319bb2f39dcf6a264d7aa9e98" Jan 31 15:12:19 crc kubenswrapper[4763]: I0131 15:12:19.361533 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" Jan 31 15:12:24 crc kubenswrapper[4763]: I0131 15:12:24.460725 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:24 crc kubenswrapper[4763]: I0131 15:12:24.631502 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.053561 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv"] Jan 31 15:12:33 crc kubenswrapper[4763]: E0131 15:12:33.054311 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerName="util" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.054325 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerName="util" Jan 31 15:12:33 crc kubenswrapper[4763]: E0131 15:12:33.054361 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerName="pull" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.054367 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerName="pull" Jan 31 15:12:33 crc kubenswrapper[4763]: E0131 15:12:33.054378 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerName="extract" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.054384 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerName="extract" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.054489 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerName="extract" Jan 
Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.054970 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv"
Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.057392 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv"]
Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.059416 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-service-cert"
Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.069663 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-bshht"
Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.212873 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42b142bb-6946-4933-841b-33c9fc9899b2-webhook-cert\") pod \"swift-operator-controller-manager-77769db8d5-gb8pv\" (UID: \"42b142bb-6946-4933-841b-33c9fc9899b2\") " pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv"
Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.212932 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b142bb-6946-4933-841b-33c9fc9899b2-apiservice-cert\") pod \"swift-operator-controller-manager-77769db8d5-gb8pv\" (UID: \"42b142bb-6946-4933-841b-33c9fc9899b2\") " pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv"
Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.212973 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gs9c\" (UniqueName: \"kubernetes.io/projected/42b142bb-6946-4933-841b-33c9fc9899b2-kube-api-access-9gs9c\") pod \"swift-operator-controller-manager-77769db8d5-gb8pv\" (UID: \"42b142bb-6946-4933-841b-33c9fc9899b2\") " pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv"
Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.314108 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42b142bb-6946-4933-841b-33c9fc9899b2-webhook-cert\") pod \"swift-operator-controller-manager-77769db8d5-gb8pv\" (UID: \"42b142bb-6946-4933-841b-33c9fc9899b2\") " pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv"
Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.314151 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b142bb-6946-4933-841b-33c9fc9899b2-apiservice-cert\") pod \"swift-operator-controller-manager-77769db8d5-gb8pv\" (UID: \"42b142bb-6946-4933-841b-33c9fc9899b2\") " pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv"
Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.314187 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gs9c\" (UniqueName: \"kubernetes.io/projected/42b142bb-6946-4933-841b-33c9fc9899b2-kube-api-access-9gs9c\") pod \"swift-operator-controller-manager-77769db8d5-gb8pv\" (UID: \"42b142bb-6946-4933-841b-33c9fc9899b2\") " pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv"
Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.319917 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b142bb-6946-4933-841b-33c9fc9899b2-apiservice-cert\") pod \"swift-operator-controller-manager-77769db8d5-gb8pv\" (UID: \"42b142bb-6946-4933-841b-33c9fc9899b2\") " pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv"
Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.326922 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42b142bb-6946-4933-841b-33c9fc9899b2-webhook-cert\") pod \"swift-operator-controller-manager-77769db8d5-gb8pv\" (UID: \"42b142bb-6946-4933-841b-33c9fc9899b2\") " pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv"
Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.329214 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gs9c\" (UniqueName: \"kubernetes.io/projected/42b142bb-6946-4933-841b-33c9fc9899b2-kube-api-access-9gs9c\") pod \"swift-operator-controller-manager-77769db8d5-gb8pv\" (UID: \"42b142bb-6946-4933-841b-33c9fc9899b2\") " pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv"
Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.375358 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv"
Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.822844 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv"]
Jan 31 15:12:34 crc kubenswrapper[4763]: I0131 15:12:34.497535 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv" event={"ID":"42b142bb-6946-4933-841b-33c9fc9899b2","Type":"ContainerStarted","Data":"fb5a37d63efd9bc239a05a8decce5fea78aaa2d4553e227aa143fbaeedf0d19b"}
Jan 31 15:12:35 crc kubenswrapper[4763]: I0131 15:12:35.505279 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv" event={"ID":"42b142bb-6946-4933-841b-33c9fc9899b2","Type":"ContainerStarted","Data":"6dd171ce46a904248293f1701ff9b40c2f4c21c03a535732238e23828992dcb5"}
Jan 31 15:12:35 crc kubenswrapper[4763]: I0131 15:12:35.506224 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv"
Jan 31 15:12:35 crc kubenswrapper[4763]: I0131 15:12:35.550076 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv" podStartSLOduration=1.251244353 podStartE2EDuration="2.550055032s" podCreationTimestamp="2026-01-31 15:12:33 +0000 UTC" firstStartedPulling="2026-01-31 15:12:33.835145476 +0000 UTC m=+1073.589883889" lastFinishedPulling="2026-01-31 15:12:35.133956275 +0000 UTC m=+1074.888694568" observedRunningTime="2026-01-31 15:12:35.521799106 +0000 UTC m=+1075.276537429" watchObservedRunningTime="2026-01-31 15:12:35.550055032 +0000 UTC m=+1075.304793335"
Jan 31 15:12:43 crc kubenswrapper[4763]: I0131 15:12:43.380662 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv"
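Immediately below, the sync loop records a failing liveness probe for machine-config-daemon: an HTTP GET to 127.0.0.1:8798/health that dies with "connection refused", which the kubelet reports as probeResult="failure". The shape of such a probe is easy to reproduce; this is an illustrative stand-in, not the kubelet prober's actual code (the kubelet counts HTTP statuses in [200,400) as success):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP liveness check the way the log shows it failing:
// any transport error (here "connect: connection refused") counts as failure.
func probe(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```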
Jan 31 15:12:44 crc kubenswrapper[4763]: I0131 15:12:44.177864 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 15:12:44 crc kubenswrapper[4763]: I0131 15:12:44.177958 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.239875 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.246873 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.248698 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf"
Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.248722 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files"
Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.249218 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data"
Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.251471 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-47mrl"
Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.263968 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.435297 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzq5q\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-kube-api-access-fzq5q\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.435456 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-lock\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.435623 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.435822 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-cache\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0"
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.537600 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.537745 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-cache\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.537778 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.537811 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzq5q\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-kube-api-access-fzq5q\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.537876 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-lock\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.538093 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") device mount path \"/mnt/openstack/pv01\"" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.538451 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-lock\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: E0131 15:12:58.538787 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:12:58 crc kubenswrapper[4763]: E0131 15:12:58.538827 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:12:58 crc kubenswrapper[4763]: E0131 15:12:58.538888 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift podName:6c4f98e3-1507-44ae-9eb7-2247ab0e37bf nodeName:}" failed. No retries permitted until 2026-01-31 15:12:59.038863053 +0000 UTC m=+1098.793601436 (durationBeforeRetry 500ms). 
Jan 31 15:12:58 crc kubenswrapper[4763]: E0131 15:12:58.538888 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift podName:6c4f98e3-1507-44ae-9eb7-2247ab0e37bf nodeName:}" failed. No retries permitted until 2026-01-31 15:12:59.038863053 +0000 UTC m=+1098.793601436 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift") pod "swift-storage-0" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf") : configmap "swift-ring-files" not found
Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.538907 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-cache\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.567496 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzq5q\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-kube-api-access-fzq5q\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.569081 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.045022 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:12:59 crc kubenswrapper[4763]: E0131 15:12:59.045179 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:12:59 crc kubenswrapper[4763]: E0131 15:12:59.045475 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:12:59 crc kubenswrapper[4763]: E0131 15:12:59.045551 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift podName:6c4f98e3-1507-44ae-9eb7-2247ab0e37bf nodeName:}" failed. No retries permitted until 2026-01-31 15:13:00.045512784 +0000 UTC m=+1099.800251077 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift") pod "swift-storage-0" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf") : configmap "swift-ring-files" not found
Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.626674 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk"]
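Each failed SetUp is parked by nestedpendingoperations with an exponentially growing durationBeforeRetry: for the etc-swift volumes in this log the delays go 500ms, 1s, 2s, 4s, 8s, a plain doubling from an initial 500ms (the kubelet also caps this backoff, though the cap is never reached here). The progression, reproduced:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Reproduce the retry delays seen for the etc-swift mounts:
	// 500ms, 1s, 2s, 4s, 8s — each failure doubles durationBeforeRetry.
	delay := 500 * time.Millisecond
	for i := 0; i < 5; i++ {
		fmt.Println("durationBeforeRetry", delay)
		delay *= 2
	}
}
```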
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.635878 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.644461 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk"] Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.768899 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.768962 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-run-httpd\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.769023 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d952a9d1-9446-4003-b83c-9603f44fb634-config-data\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.769520 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfjq2\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-kube-api-access-gfjq2\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.769566 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-log-httpd\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.870825 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfjq2\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-kube-api-access-gfjq2\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.870878 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-log-httpd\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.870908 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") pod 
\"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.870931 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-run-httpd\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.870972 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d952a9d1-9446-4003-b83c-9603f44fb634-config-data\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: E0131 15:12:59.871030 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:12:59 crc kubenswrapper[4763]: E0131 15:12:59.871062 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk: configmap "swift-ring-files" not found Jan 31 15:12:59 crc kubenswrapper[4763]: E0131 15:12:59.871107 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift podName:d952a9d1-9446-4003-b83c-9603f44fb634 nodeName:}" failed. No retries permitted until 2026-01-31 15:13:00.371091564 +0000 UTC m=+1100.125829857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift") pod "swift-proxy-7d8cf99555-qp8rk" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634") : configmap "swift-ring-files" not found Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.871344 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-log-httpd\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.871465 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-run-httpd\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.875991 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d952a9d1-9446-4003-b83c-9603f44fb634-config-data\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.886953 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfjq2\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-kube-api-access-gfjq2\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:00 crc kubenswrapper[4763]: 
Jan 31 15:13:00 crc kubenswrapper[4763]: I0131 15:13:00.073928 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:13:00 crc kubenswrapper[4763]: E0131 15:13:00.074138 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:13:00 crc kubenswrapper[4763]: E0131 15:13:00.074160 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:13:00 crc kubenswrapper[4763]: E0131 15:13:00.074222 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift podName:6c4f98e3-1507-44ae-9eb7-2247ab0e37bf nodeName:}" failed. No retries permitted until 2026-01-31 15:13:02.074205572 +0000 UTC m=+1101.828943875 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift") pod "swift-storage-0" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf") : configmap "swift-ring-files" not found
Jan 31 15:13:00 crc kubenswrapper[4763]: I0131 15:13:00.378009 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk"
Jan 31 15:13:00 crc kubenswrapper[4763]: E0131 15:13:00.378187 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:13:00 crc kubenswrapper[4763]: E0131 15:13:00.378421 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk: configmap "swift-ring-files" not found
Jan 31 15:13:00 crc kubenswrapper[4763]: E0131 15:13:00.378477 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift podName:d952a9d1-9446-4003-b83c-9603f44fb634 nodeName:}" failed. No retries permitted until 2026-01-31 15:13:01.378461875 +0000 UTC m=+1101.133200168 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift") pod "swift-proxy-7d8cf99555-qp8rk" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634") : configmap "swift-ring-files" not found
Jan 31 15:13:01 crc kubenswrapper[4763]: I0131 15:13:01.391633 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk"
Jan 31 15:13:01 crc kubenswrapper[4763]: E0131 15:13:01.391845 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:13:01 crc kubenswrapper[4763]: E0131 15:13:01.391881 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk: configmap "swift-ring-files" not found
Jan 31 15:13:01 crc kubenswrapper[4763]: E0131 15:13:01.391981 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift podName:d952a9d1-9446-4003-b83c-9603f44fb634 nodeName:}" failed. No retries permitted until 2026-01-31 15:13:03.391951262 +0000 UTC m=+1103.146689585 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift") pod "swift-proxy-7d8cf99555-qp8rk" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634") : configmap "swift-ring-files" not found
Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.102101 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:13:02 crc kubenswrapper[4763]: E0131 15:13:02.102682 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:13:02 crc kubenswrapper[4763]: E0131 15:13:02.102910 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:13:02 crc kubenswrapper[4763]: E0131 15:13:02.103113 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift podName:6c4f98e3-1507-44ae-9eb7-2247ab0e37bf nodeName:}" failed. No retries permitted until 2026-01-31 15:13:06.103087487 +0000 UTC m=+1105.857825820 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift") pod "swift-storage-0" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf") : configmap "swift-ring-files" not found
Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.326957 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-j758x"]
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.331282 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.331729 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.349747 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-j758x"] Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.408046 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcvvs\" (UniqueName: \"kubernetes.io/projected/30af61f1-3271-4c8a-9da4-44fd302b135b-kube-api-access-wcvvs\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.408119 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-scripts\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.408287 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-dispersionconf\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.408354 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-ring-data-devices\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.408383 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-swiftconf\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.408460 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30af61f1-3271-4c8a-9da4-44fd302b135b-etc-swift\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.510198 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-dispersionconf\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.510559 4763 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-ring-data-devices\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.510587 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-swiftconf\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.510646 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30af61f1-3271-4c8a-9da4-44fd302b135b-etc-swift\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.510699 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcvvs\" (UniqueName: \"kubernetes.io/projected/30af61f1-3271-4c8a-9da4-44fd302b135b-kube-api-access-wcvvs\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.510738 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-scripts\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.511552 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30af61f1-3271-4c8a-9da4-44fd302b135b-etc-swift\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.511634 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-ring-data-devices\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.511688 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-scripts\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.516652 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-dispersionconf\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.519386 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-swiftconf\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.542619 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcvvs\" (UniqueName: \"kubernetes.io/projected/30af61f1-3271-4c8a-9da4-44fd302b135b-kube-api-access-wcvvs\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.682210 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:03 crc kubenswrapper[4763]: I0131 15:13:03.124277 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-j758x"] Jan 31 15:13:03 crc kubenswrapper[4763]: W0131 15:13:03.125495 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30af61f1_3271_4c8a_9da4_44fd302b135b.slice/crio-fb609af9f52ea6872e3a8809efddfb4a4013cd859ce7ca0dadde9eb0b414e2df WatchSource:0}: Error finding container fb609af9f52ea6872e3a8809efddfb4a4013cd859ce7ca0dadde9eb0b414e2df: Status 404 returned error can't find the container with id fb609af9f52ea6872e3a8809efddfb4a4013cd859ce7ca0dadde9eb0b414e2df Jan 31 15:13:03 crc kubenswrapper[4763]: I0131 15:13:03.431821 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:03 crc kubenswrapper[4763]: E0131 15:13:03.431954 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:13:03 crc kubenswrapper[4763]: E0131 15:13:03.431980 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk: configmap "swift-ring-files" not found Jan 31 15:13:03 crc kubenswrapper[4763]: E0131 15:13:03.432042 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift podName:d952a9d1-9446-4003-b83c-9603f44fb634 nodeName:}" failed. No retries permitted until 2026-01-31 15:13:07.432022983 +0000 UTC m=+1107.186761286 (durationBeforeRetry 4s). 
Jan 31 15:13:03 crc kubenswrapper[4763]: E0131 15:13:03.432042 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift podName:d952a9d1-9446-4003-b83c-9603f44fb634 nodeName:}" failed. No retries permitted until 2026-01-31 15:13:07.432022983 +0000 UTC m=+1107.186761286 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift") pod "swift-proxy-7d8cf99555-qp8rk" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634") : configmap "swift-ring-files" not found
Jan 31 15:13:03 crc kubenswrapper[4763]: I0131 15:13:03.754080 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-j758x" event={"ID":"30af61f1-3271-4c8a-9da4-44fd302b135b","Type":"ContainerStarted","Data":"fb609af9f52ea6872e3a8809efddfb4a4013cd859ce7ca0dadde9eb0b414e2df"}
Jan 31 15:13:06 crc kubenswrapper[4763]: I0131 15:13:06.169186 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:13:06 crc kubenswrapper[4763]: E0131 15:13:06.169621 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:13:06 crc kubenswrapper[4763]: E0131 15:13:06.169660 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:13:06 crc kubenswrapper[4763]: E0131 15:13:06.169745 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift podName:6c4f98e3-1507-44ae-9eb7-2247ab0e37bf nodeName:}" failed. No retries permitted until 2026-01-31 15:13:14.169724421 +0000 UTC m=+1113.924462724 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift") pod "swift-storage-0" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf") : configmap "swift-ring-files" not found
Jan 31 15:13:06 crc kubenswrapper[4763]: I0131 15:13:06.798200 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-j758x" event={"ID":"30af61f1-3271-4c8a-9da4-44fd302b135b","Type":"ContainerStarted","Data":"5c8011a9d18428d101c91127624cd31f0ce315158e322ff2f55edf51f5e08669"}
Jan 31 15:13:06 crc kubenswrapper[4763]: I0131 15:13:06.829276 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-j758x" podStartSLOduration=1.668370099 podStartE2EDuration="4.829253703s" podCreationTimestamp="2026-01-31 15:13:02 +0000 UTC" firstStartedPulling="2026-01-31 15:13:03.127532924 +0000 UTC m=+1102.882271217" lastFinishedPulling="2026-01-31 15:13:06.288416508 +0000 UTC m=+1106.043154821" observedRunningTime="2026-01-31 15:13:06.823147442 +0000 UTC m=+1106.577885825" watchObservedRunningTime="2026-01-31 15:13:06.829253703 +0000 UTC m=+1106.583991996"
Jan 31 15:13:07 crc kubenswrapper[4763]: I0131 15:13:07.489598 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk"
Jan 31 15:13:07 crc kubenswrapper[4763]: E0131 15:13:07.489850 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:13:07 crc kubenswrapper[4763]: E0131 15:13:07.489897 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk: configmap "swift-ring-files" not found
Jan 31 15:13:07 crc kubenswrapper[4763]: E0131 15:13:07.489997 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift podName:d952a9d1-9446-4003-b83c-9603f44fb634 nodeName:}" failed. No retries permitted until 2026-01-31 15:13:15.489965936 +0000 UTC m=+1115.244704269 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift") pod "swift-proxy-7d8cf99555-qp8rk" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634") : configmap "swift-ring-files" not found
Jan 31 15:13:12 crc kubenswrapper[4763]: I0131 15:13:12.844167 4763 generic.go:334] "Generic (PLEG): container finished" podID="30af61f1-3271-4c8a-9da4-44fd302b135b" containerID="5c8011a9d18428d101c91127624cd31f0ce315158e322ff2f55edf51f5e08669" exitCode=0
Jan 31 15:13:12 crc kubenswrapper[4763]: I0131 15:13:12.844273 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-j758x" event={"ID":"30af61f1-3271-4c8a-9da4-44fd302b135b","Type":"ContainerDied","Data":"5c8011a9d18428d101c91127624cd31f0ce315158e322ff2f55edf51f5e08669"}
Jan 31 15:13:14 crc kubenswrapper[4763]: I0131 15:13:14.177911 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 15:13:14 crc kubenswrapper[4763]: I0131 15:13:14.178041 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 15:13:14 crc kubenswrapper[4763]: I0131 15:13:14.253186 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:13:14 crc kubenswrapper[4763]: I0131 15:13:14.265564 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:13:14 crc kubenswrapper[4763]: I0131 15:13:14.464931 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
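The retries stop being futile once the swift-ring-rebalance-j758x container exits 0 (15:13:12) and, evidently, the swift-ring-files ConfigMap gets published: the next retry of swift-storage-0's etc-swift mount succeeds at 15:13:14.265564. Nothing coordinates this beyond the kubelet's own retry loop, but a test or operator could also wait for the ConfigMap explicitly; a hedged client-go sketch (namespace and name taken from the log, error handling trimmed):

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll until the ConfigMap the projected volume needs actually exists,
	// mirroring what the kubelet's mount retries wait for implicitly.
	ctx := context.Background()
	for {
		_, err := cs.CoreV1().ConfigMaps("swift-kuttl-tests").Get(ctx, "swift-ring-files", metav1.GetOptions{})
		if err == nil {
			fmt.Println("swift-ring-files is available; etc-swift can now be projected")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```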
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:14 crc kubenswrapper[4763]: I0131 15:13:14.957455 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.063068 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30af61f1-3271-4c8a-9da4-44fd302b135b-etc-swift\") pod \"30af61f1-3271-4c8a-9da4-44fd302b135b\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.063410 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-dispersionconf\") pod \"30af61f1-3271-4c8a-9da4-44fd302b135b\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.063519 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcvvs\" (UniqueName: \"kubernetes.io/projected/30af61f1-3271-4c8a-9da4-44fd302b135b-kube-api-access-wcvvs\") pod \"30af61f1-3271-4c8a-9da4-44fd302b135b\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.063549 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-scripts\") pod \"30af61f1-3271-4c8a-9da4-44fd302b135b\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.063589 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-swiftconf\") pod \"30af61f1-3271-4c8a-9da4-44fd302b135b\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.063658 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-ring-data-devices\") pod \"30af61f1-3271-4c8a-9da4-44fd302b135b\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.063907 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30af61f1-3271-4c8a-9da4-44fd302b135b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "30af61f1-3271-4c8a-9da4-44fd302b135b" (UID: "30af61f1-3271-4c8a-9da4-44fd302b135b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.064150 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30af61f1-3271-4c8a-9da4-44fd302b135b-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.064315 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "30af61f1-3271-4c8a-9da4-44fd302b135b" (UID: "30af61f1-3271-4c8a-9da4-44fd302b135b"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.069935 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30af61f1-3271-4c8a-9da4-44fd302b135b-kube-api-access-wcvvs" (OuterVolumeSpecName: "kube-api-access-wcvvs") pod "30af61f1-3271-4c8a-9da4-44fd302b135b" (UID: "30af61f1-3271-4c8a-9da4-44fd302b135b"). InnerVolumeSpecName "kube-api-access-wcvvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.083463 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "30af61f1-3271-4c8a-9da4-44fd302b135b" (UID: "30af61f1-3271-4c8a-9da4-44fd302b135b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.093553 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-scripts" (OuterVolumeSpecName: "scripts") pod "30af61f1-3271-4c8a-9da4-44fd302b135b" (UID: "30af61f1-3271-4c8a-9da4-44fd302b135b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.098581 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "30af61f1-3271-4c8a-9da4-44fd302b135b" (UID: "30af61f1-3271-4c8a-9da4-44fd302b135b"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.165546 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.165586 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcvvs\" (UniqueName: \"kubernetes.io/projected/30af61f1-3271-4c8a-9da4-44fd302b135b-kube-api-access-wcvvs\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.165599 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.165609 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.165617 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.500755 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-j758x" event={"ID":"30af61f1-3271-4c8a-9da4-44fd302b135b","Type":"ContainerDied","Data":"fb609af9f52ea6872e3a8809efddfb4a4013cd859ce7ca0dadde9eb0b414e2df"} Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.500827 4763 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="fb609af9f52ea6872e3a8809efddfb4a4013cd859ce7ca0dadde9eb0b414e2df" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.500792 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.502507 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"fc755e76f946cbf270f23b7d66505db67f9f56fbc2a1281c6fec2037e341ce02"} Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.573566 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.579767 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.716640 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-j758x_30af61f1-3271-4c8a-9da4-44fd302b135b/swift-ring-rebalance/0.log" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.847584 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:16 crc kubenswrapper[4763]: I0131 15:13:16.470499 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk"] Jan 31 15:13:16 crc kubenswrapper[4763]: W0131 15:13:16.503324 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd952a9d1_9446_4003_b83c_9603f44fb634.slice/crio-c5b9e8169bbdced4c4b198da2aa6a001331670d45c8a28e822df8ec98cfb315a WatchSource:0}: Error finding container c5b9e8169bbdced4c4b198da2aa6a001331670d45c8a28e822df8ec98cfb315a: Status 404 returned error can't find the container with id c5b9e8169bbdced4c4b198da2aa6a001331670d45c8a28e822df8ec98cfb315a Jan 31 15:13:16 crc kubenswrapper[4763]: I0131 15:13:16.518068 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"26d18a07f1ba356bb7df9ab96b968aba9c5d64937ec37a59fae70520b88066d5"} Jan 31 15:13:16 crc kubenswrapper[4763]: I0131 15:13:16.518425 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"e8f41f27d9646e9b7f047f60ad0964f6a5ebd721b98bf742430a0e9d49f019dd"} Jan 31 15:13:17 crc kubenswrapper[4763]: I0131 15:13:17.276826 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-j758x_30af61f1-3271-4c8a-9da4-44fd302b135b/swift-ring-rebalance/0.log" Jan 31 15:13:17 crc kubenswrapper[4763]: I0131 15:13:17.525845 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" 
event={"ID":"d952a9d1-9446-4003-b83c-9603f44fb634","Type":"ContainerStarted","Data":"604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7"} Jan 31 15:13:17 crc kubenswrapper[4763]: I0131 15:13:17.525885 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" event={"ID":"d952a9d1-9446-4003-b83c-9603f44fb634","Type":"ContainerStarted","Data":"6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271"} Jan 31 15:13:17 crc kubenswrapper[4763]: I0131 15:13:17.525895 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" event={"ID":"d952a9d1-9446-4003-b83c-9603f44fb634","Type":"ContainerStarted","Data":"c5b9e8169bbdced4c4b198da2aa6a001331670d45c8a28e822df8ec98cfb315a"} Jan 31 15:13:17 crc kubenswrapper[4763]: I0131 15:13:17.526924 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:17 crc kubenswrapper[4763]: I0131 15:13:17.526952 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:17 crc kubenswrapper[4763]: I0131 15:13:17.529950 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"bc321d01362d270731a7e972d44081e349a9ee7b3efcaf30f8babf36901ecf1e"} Jan 31 15:13:17 crc kubenswrapper[4763]: I0131 15:13:17.529970 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"8c3af34ffb6d8a6328c95ad20cd01fd80df1d4aa3cf85710910e19ed4a069343"} Jan 31 15:13:17 crc kubenswrapper[4763]: I0131 15:13:17.551610 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" podStartSLOduration=18.551592612 podStartE2EDuration="18.551592612s" podCreationTimestamp="2026-01-31 15:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:13:17.544638698 +0000 UTC m=+1117.299376991" watchObservedRunningTime="2026-01-31 15:13:17.551592612 +0000 UTC m=+1117.306330905" Jan 31 15:13:18 crc kubenswrapper[4763]: I0131 15:13:18.539778 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"f9c131e99fc59700893d4fbe2f7af6bf69bbf3f78f285398911d18a20c8ec6ea"} Jan 31 15:13:18 crc kubenswrapper[4763]: I0131 15:13:18.540119 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"79c3d3910a7c0f333be18e5912411834c0bc3275953b7a51d75d52ac387ddcaa"} Jan 31 15:13:18 crc kubenswrapper[4763]: I0131 15:13:18.540135 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"e6a2d028a5029018d31ffce2a967e9d0ddb51a1a19145baba94de8eb996b72f0"} Jan 31 15:13:18 crc kubenswrapper[4763]: I0131 15:13:18.836181 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-j758x_30af61f1-3271-4c8a-9da4-44fd302b135b/swift-ring-rebalance/0.log" Jan 31 15:13:19 crc 
kubenswrapper[4763]: I0131 15:13:19.557149 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"79013075c8cee2c124b6f705985ce81a282712a5c1945c3a1ae8d6d583285d31"} Jan 31 15:13:20 crc kubenswrapper[4763]: I0131 15:13:20.419107 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-j758x_30af61f1-3271-4c8a-9da4-44fd302b135b/swift-ring-rebalance/0.log" Jan 31 15:13:20 crc kubenswrapper[4763]: I0131 15:13:20.571224 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"758ecdf114748a274e883c4dfb0b65ebf29c160068e2fb6ba6a6ba341b7ad8cc"} Jan 31 15:13:20 crc kubenswrapper[4763]: I0131 15:13:20.572367 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"3e3a3d2095bc51e457a624c288843b1229f6edd0c9886caeecc8ae8c8fc0b366"} Jan 31 15:13:20 crc kubenswrapper[4763]: I0131 15:13:20.572521 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"9953487bd5a77e9cae7330ff25f691cf98544a779a3aaffa22289f618b2ab84e"} Jan 31 15:13:20 crc kubenswrapper[4763]: I0131 15:13:20.572636 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"f8537cd557c893811f4771b5d344fb7198348c938522cc5297a57e159c08bb1d"} Jan 31 15:13:20 crc kubenswrapper[4763]: I0131 15:13:20.572829 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"c90149cab3e82c3b3aa0479a8bfbfea30e029a20dbf49c2af33d575a0e5ad411"} Jan 31 15:13:21 crc kubenswrapper[4763]: I0131 15:13:21.602851 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"c92a67d175e7b2e0d2f4e6d03791a25078677c370b55cc124a64ac4580b1a186"} Jan 31 15:13:21 crc kubenswrapper[4763]: I0131 15:13:21.603318 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"c27b98be7532909d83c273cb34dc9fd9b883ddc36da5b9f8e235750c5110926b"} Jan 31 15:13:21 crc kubenswrapper[4763]: I0131 15:13:21.653918 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=19.892535153 podStartE2EDuration="24.653898669s" podCreationTimestamp="2026-01-31 15:12:57 +0000 UTC" firstStartedPulling="2026-01-31 15:13:14.965847849 +0000 UTC m=+1114.720586142" lastFinishedPulling="2026-01-31 15:13:19.727211355 +0000 UTC m=+1119.481949658" observedRunningTime="2026-01-31 15:13:21.646414781 +0000 UTC m=+1121.401153084" watchObservedRunningTime="2026-01-31 15:13:21.653898669 +0000 UTC m=+1121.408636982" Jan 31 15:13:22 crc kubenswrapper[4763]: I0131 15:13:22.061263 4763 log.go:25] "Finished parsing log file" 
path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-j758x_30af61f1-3271-4c8a-9da4-44fd302b135b/swift-ring-rebalance/0.log" Jan 31 15:13:23 crc kubenswrapper[4763]: I0131 15:13:23.628169 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-j758x_30af61f1-3271-4c8a-9da4-44fd302b135b/swift-ring-rebalance/0.log" Jan 31 15:13:25 crc kubenswrapper[4763]: I0131 15:13:25.225056 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-j758x_30af61f1-3271-4c8a-9da4-44fd302b135b/swift-ring-rebalance/0.log" Jan 31 15:13:25 crc kubenswrapper[4763]: I0131 15:13:25.851977 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:25 crc kubenswrapper[4763]: I0131 15:13:25.853794 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:26 crc kubenswrapper[4763]: I0131 15:13:26.822534 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-j758x_30af61f1-3271-4c8a-9da4-44fd302b135b/swift-ring-rebalance/0.log" Jan 31 15:13:28 crc kubenswrapper[4763]: I0131 15:13:28.414670 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-j758x_30af61f1-3271-4c8a-9da4-44fd302b135b/swift-ring-rebalance/0.log" Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.812014 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:13:29 crc kubenswrapper[4763]: E0131 15:13:29.814175 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30af61f1-3271-4c8a-9da4-44fd302b135b" containerName="swift-ring-rebalance" Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.814405 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="30af61f1-3271-4c8a-9da4-44fd302b135b" containerName="swift-ring-rebalance" Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.814982 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="30af61f1-3271-4c8a-9da4-44fd302b135b" containerName="swift-ring-rebalance" Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.823484 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.838295 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.844452 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.861318 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.899981 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.937823 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-j758x"] Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.948146 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-j758x"] Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.958917 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-lfvg7"] Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.959735 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.962123 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.962279 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.967850 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-lfvg7"] Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.007438 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-etc-swift\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.007481 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-etc-swift\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.007499 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-cache\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.007528 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5twbl\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-kube-api-access-5twbl\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.007798 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-cache\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: 
I0131 15:13:30.007851 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.007885 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-lock\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.007982 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.008009 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjgnx\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-kube-api-access-mjgnx\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.008030 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-lock\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.109726 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5twbl\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-kube-api-access-5twbl\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.109792 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92920ef2-27a4-47ea-b8f0-220dc84853e4-etc-swift\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.109837 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-ring-data-devices\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.109868 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-cache\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.109893 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.109914 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-lock\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.109931 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-swiftconf\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.109947 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-dispersionconf\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.109979 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-scripts\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110029 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjgnx\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-kube-api-access-mjgnx\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110045 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110064 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-lock\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110159 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4855\" (UniqueName: \"kubernetes.io/projected/92920ef2-27a4-47ea-b8f0-220dc84853e4-kube-api-access-p4855\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110229 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-etc-swift\") pod \"swift-storage-1\" (UID: 
\"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110247 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") device mount path \"/mnt/openstack/pv02\"" pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110263 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-etc-swift\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110308 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-cache\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110384 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-cache\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110414 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-lock\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110487 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-lock\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110831 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-cache\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.111803 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") device mount path \"/mnt/openstack/pv11\"" pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.120289 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-etc-swift\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.121278 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-etc-swift\") pod 
\"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.127249 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5twbl\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-kube-api-access-5twbl\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.134530 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjgnx\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-kube-api-access-mjgnx\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.145937 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.152170 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.170260 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.187916 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.212248 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92920ef2-27a4-47ea-b8f0-220dc84853e4-etc-swift\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.212325 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-ring-data-devices\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.212372 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-swiftconf\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.212394 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-dispersionconf\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.212432 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-scripts\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.212480 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4855\" (UniqueName: \"kubernetes.io/projected/92920ef2-27a4-47ea-b8f0-220dc84853e4-kube-api-access-p4855\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.212684 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92920ef2-27a4-47ea-b8f0-220dc84853e4-etc-swift\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.213172 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-ring-data-devices\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.213380 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-scripts\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 
15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.217624 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-swiftconf\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.217897 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-dispersionconf\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.240508 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4855\" (UniqueName: \"kubernetes.io/projected/92920ef2-27a4-47ea-b8f0-220dc84853e4-kube-api-access-p4855\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.280305 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.625342 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:13:30 crc kubenswrapper[4763]: W0131 15:13:30.629841 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fd7b0a2_4f68_44bc_8720_1dcb2d975beb.slice/crio-90e93209198d21ecb365141d71e0805661beff6af7f209b3c4513e9211db45f2 WatchSource:0}: Error finding container 90e93209198d21ecb365141d71e0805661beff6af7f209b3c4513e9211db45f2: Status 404 returned error can't find the container with id 90e93209198d21ecb365141d71e0805661beff6af7f209b3c4513e9211db45f2 Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.698302 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"90e93209198d21ecb365141d71e0805661beff6af7f209b3c4513e9211db45f2"} Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.710673 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:13:30 crc kubenswrapper[4763]: W0131 15:13:30.751943 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92920ef2_27a4_47ea_b8f0_220dc84853e4.slice/crio-bad397da8bdce7d609b2654986821fb7c56faedd4c3912746b4a431f20c68d29 WatchSource:0}: Error finding container bad397da8bdce7d609b2654986821fb7c56faedd4c3912746b4a431f20c68d29: Status 404 returned error can't find the container with id bad397da8bdce7d609b2654986821fb7c56faedd4c3912746b4a431f20c68d29 Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.758279 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-lfvg7"] Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.060560 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30af61f1-3271-4c8a-9da4-44fd302b135b" path="/var/lib/kubelet/pods/30af61f1-3271-4c8a-9da4-44fd302b135b/volumes" Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.708534 4763 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"db9e135b4f60c10dc082344ade61ed089817f4f5bb60cf75e034bd8fb673be95"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.708854 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"95874f605a14427c61a1fe2f12f189c9c62e0310731ed8b97d4aa745deb8470a"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.708866 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"43761f858930e3361c93e26c546d75f9049fbeb0aed2f837e6aa6e57845c2002"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.708874 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"9ec12f6ac4fb2fe1a0eaa6ae49e4dc13f98f460950b7abbd35ef0d14386f1613"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.708883 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"2e0d604e834eeb3cec78cfb460add37121eda53b4eea073ea361954901be3261"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.710583 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" event={"ID":"92920ef2-27a4-47ea-b8f0-220dc84853e4","Type":"ContainerStarted","Data":"6f58266364b48d61b8c86fc9324d5893f7c7c65dee6b93a4c08f45a44a010cb1"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.710607 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" event={"ID":"92920ef2-27a4-47ea-b8f0-220dc84853e4","Type":"ContainerStarted","Data":"bad397da8bdce7d609b2654986821fb7c56faedd4c3912746b4a431f20c68d29"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.713982 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"db2e802c7fb1380e7d58552715a2bbc320cf91d8405eb9d9f94c80688bdaa86e"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.714116 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"b66057b270aafdfd00402f417700789d4eb2a09f6b95a3210ba44757a3173fb1"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.714196 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"1162fdbabfecb0f41f85a7d666a0cc25c2eb72256d57c4d4b10c1e0aa3ad58c6"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.714262 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"afa6a3f7b8649103325ad5315e9dc80fef7ee37028bfc085ed5215505185d150"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.732034 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" 
podStartSLOduration=2.7320163109999998 podStartE2EDuration="2.732016311s" podCreationTimestamp="2026-01-31 15:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:13:31.724548244 +0000 UTC m=+1131.479286537" watchObservedRunningTime="2026-01-31 15:13:31.732016311 +0000 UTC m=+1131.486754614" Jan 31 15:13:32 crc kubenswrapper[4763]: I0131 15:13:32.784056 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"65af4eadceb3880bfd8c0ae7c91be9262b951343a51ea6f4b387d7805c18a203"} Jan 31 15:13:32 crc kubenswrapper[4763]: I0131 15:13:32.784106 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"62e05d6e34f51a24fb36847b0e659c0ded9040b7a9df03f170ac2888f520a2f7"} Jan 31 15:13:32 crc kubenswrapper[4763]: I0131 15:13:32.784114 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"c1724ae1efdd76da3683f41ca56bb7dec66988978a2f0f763be0b355c56b24d4"} Jan 31 15:13:32 crc kubenswrapper[4763]: I0131 15:13:32.806887 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"87fd27d178e3710cca6b6fe084307cf387592c330affd1b08797d73b27da0729"} Jan 31 15:13:32 crc kubenswrapper[4763]: I0131 15:13:32.806932 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"46dfcdcd0be805e9a45249b9185005e90a2633a0552782f7c25619de9a9e01d7"} Jan 31 15:13:32 crc kubenswrapper[4763]: I0131 15:13:32.806945 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"b145444ea56aafdcef5964b190965b77165b7812022b77df5b6a7fb787ea1f6b"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.843148 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"bcd8336fa8422ff8f4710866669ead2747397c3d7e3479845eca695012773964"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.843518 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"11ca22dc3aea18b7002ee209c33ea918f1a6a74de6a87b3e0282c7b043144034"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.843534 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"53cc7d03295052ed760abf6e7bf461dbb5c915e085d4733f09f4b88dee41fce6"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.843565 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"44221e09a6221477f521497096958975eb2d81c909e9df7abcc70e7a6affe606"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 
15:13:33.843582 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"900954e8a49f5e385b29971b0076eb09818fee16df406d5bda2ffa32153b8f2c"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.843594 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"b347ad616d945afabba996ef7b64761509d97dec2de5dbce22818ebd309bb2fd"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.863203 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"97469e71426efac22b111ee22106730f06552f0b2b6a901f3e3e8bc6c68198dc"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.863281 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"02a531233c8eff48d67d9eac0b9ddc4fec244753f6b3bfb9b21057bdb7be4c7c"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.863299 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"82d538c669bae5f7f95077ed345669166fe3742501df486afd6361232d21fcc5"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.863341 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"e4b43e4f4b97a5362dd61572955b58929d58cd939580340917dd6856eee48c51"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.863354 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"772b6f037f17f533f5cff02df3680cef890ca71af7aadc9080514f6990cfd4dc"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.863365 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"f4ab4db4c5d9dcb6ccd57b84ea36065134a3bd7c3fff7268ce34ac3b6e1ba0ee"} Jan 31 15:13:34 crc kubenswrapper[4763]: I0131 15:13:34.879795 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"fc85ed3fbad7523183d7b9b76e6d1c6a4d902df91667936a1b1b3eb39ced033a"} Jan 31 15:13:34 crc kubenswrapper[4763]: I0131 15:13:34.879846 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"0faa735c202a6e16bf9b9e8948e5792bcc5154cd3f56ada820b906ae813092c6"} Jan 31 15:13:34 crc kubenswrapper[4763]: I0131 15:13:34.889869 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"2801ebc5b5111a3f354bc751c7f4df5a35e9e0f9f8c1b8f97dd7437b72bd0333"} Jan 31 15:13:34 crc kubenswrapper[4763]: I0131 15:13:34.889927 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"a0512f3a95ac09e16cf86ec9e8bfe2bd51334b12e32f11edcc9924d32d0317da"} Jan 31 15:13:34 crc kubenswrapper[4763]: I0131 15:13:34.934316 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-2" podStartSLOduration=6.934295859 podStartE2EDuration="6.934295859s" podCreationTimestamp="2026-01-31 15:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:13:34.929203834 +0000 UTC m=+1134.683942137" watchObservedRunningTime="2026-01-31 15:13:34.934295859 +0000 UTC m=+1134.689034152" Jan 31 15:13:34 crc kubenswrapper[4763]: I0131 15:13:34.984996 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-1" podStartSLOduration=6.984969588 podStartE2EDuration="6.984969588s" podCreationTimestamp="2026-01-31 15:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:13:34.978139888 +0000 UTC m=+1134.732878181" watchObservedRunningTime="2026-01-31 15:13:34.984969588 +0000 UTC m=+1134.739707901" Jan 31 15:13:39 crc kubenswrapper[4763]: I0131 15:13:39.940457 4763 generic.go:334] "Generic (PLEG): container finished" podID="92920ef2-27a4-47ea-b8f0-220dc84853e4" containerID="6f58266364b48d61b8c86fc9324d5893f7c7c65dee6b93a4c08f45a44a010cb1" exitCode=0 Jan 31 15:13:39 crc kubenswrapper[4763]: I0131 15:13:39.940556 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" event={"ID":"92920ef2-27a4-47ea-b8f0-220dc84853e4","Type":"ContainerDied","Data":"6f58266364b48d61b8c86fc9324d5893f7c7c65dee6b93a4c08f45a44a010cb1"} Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.241352 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.283876 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-ring-data-devices\") pod \"92920ef2-27a4-47ea-b8f0-220dc84853e4\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.283953 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-dispersionconf\") pod \"92920ef2-27a4-47ea-b8f0-220dc84853e4\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.284001 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-swiftconf\") pod \"92920ef2-27a4-47ea-b8f0-220dc84853e4\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.284040 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92920ef2-27a4-47ea-b8f0-220dc84853e4-etc-swift\") pod \"92920ef2-27a4-47ea-b8f0-220dc84853e4\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.284134 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-scripts\") pod \"92920ef2-27a4-47ea-b8f0-220dc84853e4\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.284166 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4855\" (UniqueName: \"kubernetes.io/projected/92920ef2-27a4-47ea-b8f0-220dc84853e4-kube-api-access-p4855\") pod \"92920ef2-27a4-47ea-b8f0-220dc84853e4\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.284739 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "92920ef2-27a4-47ea-b8f0-220dc84853e4" (UID: "92920ef2-27a4-47ea-b8f0-220dc84853e4"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.284918 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92920ef2-27a4-47ea-b8f0-220dc84853e4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "92920ef2-27a4-47ea-b8f0-220dc84853e4" (UID: "92920ef2-27a4-47ea-b8f0-220dc84853e4"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.289291 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92920ef2-27a4-47ea-b8f0-220dc84853e4-kube-api-access-p4855" (OuterVolumeSpecName: "kube-api-access-p4855") pod "92920ef2-27a4-47ea-b8f0-220dc84853e4" (UID: "92920ef2-27a4-47ea-b8f0-220dc84853e4"). InnerVolumeSpecName "kube-api-access-p4855". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.305802 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "92920ef2-27a4-47ea-b8f0-220dc84853e4" (UID: "92920ef2-27a4-47ea-b8f0-220dc84853e4"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.306533 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "92920ef2-27a4-47ea-b8f0-220dc84853e4" (UID: "92920ef2-27a4-47ea-b8f0-220dc84853e4"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.320797 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-scripts" (OuterVolumeSpecName: "scripts") pod "92920ef2-27a4-47ea-b8f0-220dc84853e4" (UID: "92920ef2-27a4-47ea-b8f0-220dc84853e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.386302 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.386371 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4855\" (UniqueName: \"kubernetes.io/projected/92920ef2-27a4-47ea-b8f0-220dc84853e4-kube-api-access-p4855\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.386394 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.386412 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.386646 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.386668 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92920ef2-27a4-47ea-b8f0-220dc84853e4-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.962135 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" event={"ID":"92920ef2-27a4-47ea-b8f0-220dc84853e4","Type":"ContainerDied","Data":"bad397da8bdce7d609b2654986821fb7c56faedd4c3912746b4a431f20c68d29"} Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.962236 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bad397da8bdce7d609b2654986821fb7c56faedd4c3912746b4a431f20c68d29" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.962265 4763 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.297960 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv"] Jan 31 15:13:42 crc kubenswrapper[4763]: E0131 15:13:42.298325 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92920ef2-27a4-47ea-b8f0-220dc84853e4" containerName="swift-ring-rebalance" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.298340 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="92920ef2-27a4-47ea-b8f0-220dc84853e4" containerName="swift-ring-rebalance" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.298513 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="92920ef2-27a4-47ea-b8f0-220dc84853e4" containerName="swift-ring-rebalance" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.299076 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.302090 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.304122 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.310864 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv"] Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.401232 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-swiftconf\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.401278 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8eed4c21-330f-4e87-ab2e-12aed0685331-etc-swift\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.401310 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-ring-data-devices\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.401342 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-dispersionconf\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.401371 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-scripts\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.401418 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b47dn\" (UniqueName: \"kubernetes.io/projected/8eed4c21-330f-4e87-ab2e-12aed0685331-kube-api-access-b47dn\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.502334 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-swiftconf\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.502601 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8eed4c21-330f-4e87-ab2e-12aed0685331-etc-swift\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.502638 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-ring-data-devices\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.502674 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-dispersionconf\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.502726 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-scripts\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.502774 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b47dn\" (UniqueName: \"kubernetes.io/projected/8eed4c21-330f-4e87-ab2e-12aed0685331-kube-api-access-b47dn\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.503100 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8eed4c21-330f-4e87-ab2e-12aed0685331-etc-swift\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.503320 4763 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-ring-data-devices\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.503941 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-scripts\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.505731 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-dispersionconf\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.506407 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-swiftconf\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.521294 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b47dn\" (UniqueName: \"kubernetes.io/projected/8eed4c21-330f-4e87-ab2e-12aed0685331-kube-api-access-b47dn\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.618526 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:43 crc kubenswrapper[4763]: I0131 15:13:43.080548 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv"] Jan 31 15:13:43 crc kubenswrapper[4763]: I0131 15:13:43.980254 4763 generic.go:334] "Generic (PLEG): container finished" podID="8eed4c21-330f-4e87-ab2e-12aed0685331" containerID="d277556494f9e3227af7edd989883de9cb25b67323d2c4aae46e1055b092bb37" exitCode=0 Jan 31 15:13:43 crc kubenswrapper[4763]: I0131 15:13:43.980320 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" event={"ID":"8eed4c21-330f-4e87-ab2e-12aed0685331","Type":"ContainerDied","Data":"d277556494f9e3227af7edd989883de9cb25b67323d2c4aae46e1055b092bb37"} Jan 31 15:13:43 crc kubenswrapper[4763]: I0131 15:13:43.980514 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" event={"ID":"8eed4c21-330f-4e87-ab2e-12aed0685331","Type":"ContainerStarted","Data":"f053bd1214d0bded5c5fac6cbf980f79b07eb3facacc192c4df673d6ea33cecf"} Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.026541 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv"] Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.039109 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv"] Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.178485 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.178591 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.178673 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.179778 4763 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f20a9db2f936557dee18355e3894646df84495361a6df22f393e9e76f8aebb8e"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.179907 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://f20a9db2f936557dee18355e3894646df84495361a6df22f393e9e76f8aebb8e" gracePeriod=600 Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.993211 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="f20a9db2f936557dee18355e3894646df84495361a6df22f393e9e76f8aebb8e" exitCode=0 
Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.993298 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"f20a9db2f936557dee18355e3894646df84495361a6df22f393e9e76f8aebb8e"} Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.993613 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"ed9e66445ed11fb3e31f897826819c87a5cfa2eaf20ae073d2a90c461528b554"} Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.993645 4763 scope.go:117] "RemoveContainer" containerID="b85e19e54b5edf80a14f0fece0b4788a8867e6919fb49643606ef0f2d6f8bd3e" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.323152 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.453270 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-scripts\") pod \"8eed4c21-330f-4e87-ab2e-12aed0685331\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.453342 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b47dn\" (UniqueName: \"kubernetes.io/projected/8eed4c21-330f-4e87-ab2e-12aed0685331-kube-api-access-b47dn\") pod \"8eed4c21-330f-4e87-ab2e-12aed0685331\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.453380 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-dispersionconf\") pod \"8eed4c21-330f-4e87-ab2e-12aed0685331\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.453496 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8eed4c21-330f-4e87-ab2e-12aed0685331-etc-swift\") pod \"8eed4c21-330f-4e87-ab2e-12aed0685331\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.453610 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-swiftconf\") pod \"8eed4c21-330f-4e87-ab2e-12aed0685331\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.453659 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-ring-data-devices\") pod \"8eed4c21-330f-4e87-ab2e-12aed0685331\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.455167 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eed4c21-330f-4e87-ab2e-12aed0685331-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8eed4c21-330f-4e87-ab2e-12aed0685331" (UID: "8eed4c21-330f-4e87-ab2e-12aed0685331"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.455970 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "8eed4c21-330f-4e87-ab2e-12aed0685331" (UID: "8eed4c21-330f-4e87-ab2e-12aed0685331"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.456632 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6"] Jan 31 15:13:45 crc kubenswrapper[4763]: E0131 15:13:45.457041 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eed4c21-330f-4e87-ab2e-12aed0685331" containerName="swift-ring-rebalance" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.457054 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eed4c21-330f-4e87-ab2e-12aed0685331" containerName="swift-ring-rebalance" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.457225 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eed4c21-330f-4e87-ab2e-12aed0685331" containerName="swift-ring-rebalance" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.457642 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.460903 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eed4c21-330f-4e87-ab2e-12aed0685331-kube-api-access-b47dn" (OuterVolumeSpecName: "kube-api-access-b47dn") pod "8eed4c21-330f-4e87-ab2e-12aed0685331" (UID: "8eed4c21-330f-4e87-ab2e-12aed0685331"). InnerVolumeSpecName "kube-api-access-b47dn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.479677 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-scripts" (OuterVolumeSpecName: "scripts") pod "8eed4c21-330f-4e87-ab2e-12aed0685331" (UID: "8eed4c21-330f-4e87-ab2e-12aed0685331"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.480506 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "8eed4c21-330f-4e87-ab2e-12aed0685331" (UID: "8eed4c21-330f-4e87-ab2e-12aed0685331"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.483587 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6"] Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.517838 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "8eed4c21-330f-4e87-ab2e-12aed0685331" (UID: "8eed4c21-330f-4e87-ab2e-12aed0685331"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.555635 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-dispersionconf\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.555711 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-ring-data-devices\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.555776 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-swiftconf\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.555809 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/754727d0-e275-4404-8805-af884fac0750-etc-swift\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.555863 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfn26\" (UniqueName: \"kubernetes.io/projected/754727d0-e275-4404-8805-af884fac0750-kube-api-access-rfn26\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.555929 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-scripts\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.555990 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.556006 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.556018 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.556029 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b47dn\" (UniqueName: 
\"kubernetes.io/projected/8eed4c21-330f-4e87-ab2e-12aed0685331-kube-api-access-b47dn\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.556042 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.556056 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8eed4c21-330f-4e87-ab2e-12aed0685331-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.657398 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-swiftconf\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.657493 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/754727d0-e275-4404-8805-af884fac0750-etc-swift\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.657567 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfn26\" (UniqueName: \"kubernetes.io/projected/754727d0-e275-4404-8805-af884fac0750-kube-api-access-rfn26\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.657642 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-scripts\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.657729 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-dispersionconf\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.657767 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-ring-data-devices\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.658144 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/754727d0-e275-4404-8805-af884fac0750-etc-swift\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.658595 4763 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-scripts\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.659059 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-ring-data-devices\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.664279 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-dispersionconf\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.664406 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-swiftconf\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.678255 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfn26\" (UniqueName: \"kubernetes.io/projected/754727d0-e275-4404-8805-af884fac0750-kube-api-access-rfn26\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.872519 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:46 crc kubenswrapper[4763]: I0131 15:13:46.003895 4763 scope.go:117] "RemoveContainer" containerID="d277556494f9e3227af7edd989883de9cb25b67323d2c4aae46e1055b092bb37" Jan 31 15:13:46 crc kubenswrapper[4763]: I0131 15:13:46.003973 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:46 crc kubenswrapper[4763]: I0131 15:13:46.330826 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6"] Jan 31 15:13:46 crc kubenswrapper[4763]: E0131 15:13:46.955343 4763 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod754727d0_e275_4404_8805_af884fac0750.slice/crio-conmon-e321792cb928347235568231c246985837d0fc98a70e7f41e43b1fee17043a7c.scope\": RecentStats: unable to find data in memory cache]" Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.021676 4763 generic.go:334] "Generic (PLEG): container finished" podID="754727d0-e275-4404-8805-af884fac0750" containerID="e321792cb928347235568231c246985837d0fc98a70e7f41e43b1fee17043a7c" exitCode=0 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.021751 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" event={"ID":"754727d0-e275-4404-8805-af884fac0750","Type":"ContainerDied","Data":"e321792cb928347235568231c246985837d0fc98a70e7f41e43b1fee17043a7c"} Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.021799 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" event={"ID":"754727d0-e275-4404-8805-af884fac0750","Type":"ContainerStarted","Data":"20661d33d92f81fd7d6df3311e0f2533bfee4b0a999a9a78271c93e6e7e6e54a"} Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.063448 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eed4c21-330f-4e87-ab2e-12aed0685331" path="/var/lib/kubelet/pods/8eed4c21-330f-4e87-ab2e-12aed0685331/volumes" Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.079773 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6"] Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.083616 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6"] Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.189594 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190163 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-server" containerID="cri-o://e8f41f27d9646e9b7f047f60ad0964f6a5ebd721b98bf742430a0e9d49f019dd" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190202 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-server" containerID="cri-o://c90149cab3e82c3b3aa0479a8bfbfea30e029a20dbf49c2af33d575a0e5ad411" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190306 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-updater" containerID="cri-o://79013075c8cee2c124b6f705985ce81a282712a5c1945c3a1ae8d6d583285d31" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190359 4763 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-auditor" containerID="cri-o://f9c131e99fc59700893d4fbe2f7af6bf69bbf3f78f285398911d18a20c8ec6ea" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190425 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-reaper" containerID="cri-o://bc321d01362d270731a7e972d44081e349a9ee7b3efcaf30f8babf36901ecf1e" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190412 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-auditor" containerID="cri-o://8c3af34ffb6d8a6328c95ad20cd01fd80df1d4aa3cf85710910e19ed4a069343" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190472 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="rsync" containerID="cri-o://c27b98be7532909d83c273cb34dc9fd9b883ddc36da5b9f8e235750c5110926b" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190526 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-updater" containerID="cri-o://3e3a3d2095bc51e457a624c288843b1229f6edd0c9886caeecc8ae8c8fc0b366" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190425 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-expirer" containerID="cri-o://758ecdf114748a274e883c4dfb0b65ebf29c160068e2fb6ba6a6ba341b7ad8cc" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190453 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="swift-recon-cron" containerID="cri-o://c92a67d175e7b2e0d2f4e6d03791a25078677c370b55cc124a64ac4580b1a186" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190554 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-replicator" containerID="cri-o://26d18a07f1ba356bb7df9ab96b968aba9c5d64937ec37a59fae70520b88066d5" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190615 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-auditor" containerID="cri-o://9953487bd5a77e9cae7330ff25f691cf98544a779a3aaffa22289f618b2ab84e" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190631 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-replicator" containerID="cri-o://79c3d3910a7c0f333be18e5912411834c0bc3275953b7a51d75d52ac387ddcaa" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190669 4763 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-replicator" containerID="cri-o://f8537cd557c893811f4771b5d344fb7198348c938522cc5297a57e159c08bb1d" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190677 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-server" containerID="cri-o://e6a2d028a5029018d31ffce2a967e9d0ddb51a1a19145baba94de8eb996b72f0" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.203254 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204034 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-server" containerID="cri-o://afa6a3f7b8649103325ad5315e9dc80fef7ee37028bfc085ed5215505185d150" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204170 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="swift-recon-cron" containerID="cri-o://a0512f3a95ac09e16cf86ec9e8bfe2bd51334b12e32f11edcc9924d32d0317da" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204226 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="rsync" containerID="cri-o://2801ebc5b5111a3f354bc751c7f4df5a35e9e0f9f8c1b8f97dd7437b72bd0333" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204264 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-expirer" containerID="cri-o://bcd8336fa8422ff8f4710866669ead2747397c3d7e3479845eca695012773964" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204301 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-updater" containerID="cri-o://11ca22dc3aea18b7002ee209c33ea918f1a6a74de6a87b3e0282c7b043144034" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204338 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-auditor" containerID="cri-o://53cc7d03295052ed760abf6e7bf461dbb5c915e085d4733f09f4b88dee41fce6" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204375 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-replicator" containerID="cri-o://44221e09a6221477f521497096958975eb2d81c909e9df7abcc70e7a6affe606" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204438 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-server" 
containerID="cri-o://900954e8a49f5e385b29971b0076eb09818fee16df406d5bda2ffa32153b8f2c" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204479 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-updater" containerID="cri-o://b347ad616d945afabba996ef7b64761509d97dec2de5dbce22818ebd309bb2fd" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204517 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-auditor" containerID="cri-o://65af4eadceb3880bfd8c0ae7c91be9262b951343a51ea6f4b387d7805c18a203" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204553 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-replicator" containerID="cri-o://62e05d6e34f51a24fb36847b0e659c0ded9040b7a9df03f170ac2888f520a2f7" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204573 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-auditor" containerID="cri-o://b66057b270aafdfd00402f417700789d4eb2a09f6b95a3210ba44757a3173fb1" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204594 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-replicator" containerID="cri-o://1162fdbabfecb0f41f85a7d666a0cc25c2eb72256d57c4d4b10c1e0aa3ad58c6" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204635 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-server" containerID="cri-o://c1724ae1efdd76da3683f41ca56bb7dec66988978a2f0f763be0b355c56b24d4" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204684 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-reaper" containerID="cri-o://db2e802c7fb1380e7d58552715a2bbc320cf91d8405eb9d9f94c80688bdaa86e" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.219616 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.220218 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-server" containerID="cri-o://9ec12f6ac4fb2fe1a0eaa6ae49e4dc13f98f460950b7abbd35ef0d14386f1613" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.220602 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="swift-recon-cron" containerID="cri-o://0faa735c202a6e16bf9b9e8948e5792bcc5154cd3f56ada820b906ae813092c6" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 
15:13:47.220663 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="rsync" containerID="cri-o://fc85ed3fbad7523183d7b9b76e6d1c6a4d902df91667936a1b1b3eb39ced033a" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.220734 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-expirer" containerID="cri-o://97469e71426efac22b111ee22106730f06552f0b2b6a901f3e3e8bc6c68198dc" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.220782 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-updater" containerID="cri-o://02a531233c8eff48d67d9eac0b9ddc4fec244753f6b3bfb9b21057bdb7be4c7c" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.220823 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-auditor" containerID="cri-o://82d538c669bae5f7f95077ed345669166fe3742501df486afd6361232d21fcc5" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.220862 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-replicator" containerID="cri-o://e4b43e4f4b97a5362dd61572955b58929d58cd939580340917dd6856eee48c51" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.220902 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-server" containerID="cri-o://772b6f037f17f533f5cff02df3680cef890ca71af7aadc9080514f6990cfd4dc" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.220942 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-updater" containerID="cri-o://f4ab4db4c5d9dcb6ccd57b84ea36065134a3bd7c3fff7268ce34ac3b6e1ba0ee" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.220979 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-auditor" containerID="cri-o://87fd27d178e3710cca6b6fe084307cf387592c330affd1b08797d73b27da0729" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.221020 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-replicator" containerID="cri-o://46dfcdcd0be805e9a45249b9185005e90a2633a0552782f7c25619de9a9e01d7" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.221060 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-server" containerID="cri-o://b145444ea56aafdcef5964b190965b77165b7812022b77df5b6a7fb787ea1f6b" gracePeriod=30 Jan 31 15:13:47 crc 
kubenswrapper[4763]: I0131 15:13:47.221115 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-reaper" containerID="cri-o://db9e135b4f60c10dc082344ade61ed089817f4f5bb60cf75e034bd8fb673be95" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.221168 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-auditor" containerID="cri-o://95874f605a14427c61a1fe2f12f189c9c62e0310731ed8b97d4aa745deb8470a" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.221219 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-replicator" containerID="cri-o://43761f858930e3361c93e26c546d75f9049fbeb0aed2f837e6aa6e57845c2002" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.231026 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-lfvg7"] Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.236301 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-lfvg7"] Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.283512 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk"] Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.284033 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" podUID="d952a9d1-9446-4003-b83c-9603f44fb634" containerName="proxy-httpd" containerID="cri-o://6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.284235 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" podUID="d952a9d1-9446-4003-b83c-9603f44fb634" containerName="proxy-server" containerID="cri-o://604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.957252 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.013294 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfjq2\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-kube-api-access-gfjq2\") pod \"d952a9d1-9446-4003-b83c-9603f44fb634\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.013372 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d952a9d1-9446-4003-b83c-9603f44fb634-config-data\") pod \"d952a9d1-9446-4003-b83c-9603f44fb634\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.013400 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-run-httpd\") pod \"d952a9d1-9446-4003-b83c-9603f44fb634\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.013460 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-log-httpd\") pod \"d952a9d1-9446-4003-b83c-9603f44fb634\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.013486 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") pod \"d952a9d1-9446-4003-b83c-9603f44fb634\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.014236 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d952a9d1-9446-4003-b83c-9603f44fb634" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.014306 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d952a9d1-9446-4003-b83c-9603f44fb634" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.022489 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-kube-api-access-gfjq2" (OuterVolumeSpecName: "kube-api-access-gfjq2") pod "d952a9d1-9446-4003-b83c-9603f44fb634" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634"). InnerVolumeSpecName "kube-api-access-gfjq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.025683 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d952a9d1-9446-4003-b83c-9603f44fb634" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049564 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="2801ebc5b5111a3f354bc751c7f4df5a35e9e0f9f8c1b8f97dd7437b72bd0333" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049595 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="bcd8336fa8422ff8f4710866669ead2747397c3d7e3479845eca695012773964" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049603 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="11ca22dc3aea18b7002ee209c33ea918f1a6a74de6a87b3e0282c7b043144034" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049610 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="53cc7d03295052ed760abf6e7bf461dbb5c915e085d4733f09f4b88dee41fce6" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049618 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="44221e09a6221477f521497096958975eb2d81c909e9df7abcc70e7a6affe606" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049625 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="900954e8a49f5e385b29971b0076eb09818fee16df406d5bda2ffa32153b8f2c" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049632 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="b347ad616d945afabba996ef7b64761509d97dec2de5dbce22818ebd309bb2fd" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049638 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="65af4eadceb3880bfd8c0ae7c91be9262b951343a51ea6f4b387d7805c18a203" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049644 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="62e05d6e34f51a24fb36847b0e659c0ded9040b7a9df03f170ac2888f520a2f7" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049651 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="c1724ae1efdd76da3683f41ca56bb7dec66988978a2f0f763be0b355c56b24d4" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049657 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="db2e802c7fb1380e7d58552715a2bbc320cf91d8405eb9d9f94c80688bdaa86e" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049664 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="b66057b270aafdfd00402f417700789d4eb2a09f6b95a3210ba44757a3173fb1" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049671 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="1162fdbabfecb0f41f85a7d666a0cc25c2eb72256d57c4d4b10c1e0aa3ad58c6" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049677 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" 
containerID="afa6a3f7b8649103325ad5315e9dc80fef7ee37028bfc085ed5215505185d150" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049729 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"2801ebc5b5111a3f354bc751c7f4df5a35e9e0f9f8c1b8f97dd7437b72bd0333"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049753 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"bcd8336fa8422ff8f4710866669ead2747397c3d7e3479845eca695012773964"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049766 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"11ca22dc3aea18b7002ee209c33ea918f1a6a74de6a87b3e0282c7b043144034"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049777 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"53cc7d03295052ed760abf6e7bf461dbb5c915e085d4733f09f4b88dee41fce6"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049789 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"44221e09a6221477f521497096958975eb2d81c909e9df7abcc70e7a6affe606"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049802 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"900954e8a49f5e385b29971b0076eb09818fee16df406d5bda2ffa32153b8f2c"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049812 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"b347ad616d945afabba996ef7b64761509d97dec2de5dbce22818ebd309bb2fd"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049825 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"65af4eadceb3880bfd8c0ae7c91be9262b951343a51ea6f4b387d7805c18a203"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049835 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"62e05d6e34f51a24fb36847b0e659c0ded9040b7a9df03f170ac2888f520a2f7"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049846 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"c1724ae1efdd76da3683f41ca56bb7dec66988978a2f0f763be0b355c56b24d4"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049858 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"db2e802c7fb1380e7d58552715a2bbc320cf91d8405eb9d9f94c80688bdaa86e"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049866 4763 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"b66057b270aafdfd00402f417700789d4eb2a09f6b95a3210ba44757a3173fb1"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049875 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"1162fdbabfecb0f41f85a7d666a0cc25c2eb72256d57c4d4b10c1e0aa3ad58c6"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049884 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"afa6a3f7b8649103325ad5315e9dc80fef7ee37028bfc085ed5215505185d150"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.051329 4763 generic.go:334] "Generic (PLEG): container finished" podID="d952a9d1-9446-4003-b83c-9603f44fb634" containerID="604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.051346 4763 generic.go:334] "Generic (PLEG): container finished" podID="d952a9d1-9446-4003-b83c-9603f44fb634" containerID="6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.051372 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" event={"ID":"d952a9d1-9446-4003-b83c-9603f44fb634","Type":"ContainerDied","Data":"604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.051386 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" event={"ID":"d952a9d1-9446-4003-b83c-9603f44fb634","Type":"ContainerDied","Data":"6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.051395 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" event={"ID":"d952a9d1-9446-4003-b83c-9603f44fb634","Type":"ContainerDied","Data":"c5b9e8169bbdced4c4b198da2aa6a001331670d45c8a28e822df8ec98cfb315a"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.051409 4763 scope.go:117] "RemoveContainer" containerID="604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.051514 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.051744 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d952a9d1-9446-4003-b83c-9603f44fb634-config-data" (OuterVolumeSpecName: "config-data") pod "d952a9d1-9446-4003-b83c-9603f44fb634" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057053 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="fc85ed3fbad7523183d7b9b76e6d1c6a4d902df91667936a1b1b3eb39ced033a" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057081 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="97469e71426efac22b111ee22106730f06552f0b2b6a901f3e3e8bc6c68198dc" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057088 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="02a531233c8eff48d67d9eac0b9ddc4fec244753f6b3bfb9b21057bdb7be4c7c" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057095 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="82d538c669bae5f7f95077ed345669166fe3742501df486afd6361232d21fcc5" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057102 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="e4b43e4f4b97a5362dd61572955b58929d58cd939580340917dd6856eee48c51" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057108 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="772b6f037f17f533f5cff02df3680cef890ca71af7aadc9080514f6990cfd4dc" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057114 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="f4ab4db4c5d9dcb6ccd57b84ea36065134a3bd7c3fff7268ce34ac3b6e1ba0ee" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057120 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="87fd27d178e3710cca6b6fe084307cf387592c330affd1b08797d73b27da0729" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057126 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="46dfcdcd0be805e9a45249b9185005e90a2633a0552782f7c25619de9a9e01d7" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057132 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="b145444ea56aafdcef5964b190965b77165b7812022b77df5b6a7fb787ea1f6b" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057138 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="db9e135b4f60c10dc082344ade61ed089817f4f5bb60cf75e034bd8fb673be95" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057145 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="95874f605a14427c61a1fe2f12f189c9c62e0310731ed8b97d4aa745deb8470a" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057151 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="43761f858930e3361c93e26c546d75f9049fbeb0aed2f837e6aa6e57845c2002" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057157 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" 
containerID="9ec12f6ac4fb2fe1a0eaa6ae49e4dc13f98f460950b7abbd35ef0d14386f1613" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057201 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"fc85ed3fbad7523183d7b9b76e6d1c6a4d902df91667936a1b1b3eb39ced033a"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057228 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"97469e71426efac22b111ee22106730f06552f0b2b6a901f3e3e8bc6c68198dc"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057241 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"02a531233c8eff48d67d9eac0b9ddc4fec244753f6b3bfb9b21057bdb7be4c7c"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057251 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"82d538c669bae5f7f95077ed345669166fe3742501df486afd6361232d21fcc5"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057262 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"e4b43e4f4b97a5362dd61572955b58929d58cd939580340917dd6856eee48c51"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057273 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"772b6f037f17f533f5cff02df3680cef890ca71af7aadc9080514f6990cfd4dc"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057281 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"f4ab4db4c5d9dcb6ccd57b84ea36065134a3bd7c3fff7268ce34ac3b6e1ba0ee"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057289 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"87fd27d178e3710cca6b6fe084307cf387592c330affd1b08797d73b27da0729"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057300 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"46dfcdcd0be805e9a45249b9185005e90a2633a0552782f7c25619de9a9e01d7"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057308 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"b145444ea56aafdcef5964b190965b77165b7812022b77df5b6a7fb787ea1f6b"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057315 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"db9e135b4f60c10dc082344ade61ed089817f4f5bb60cf75e034bd8fb673be95"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057323 4763 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"95874f605a14427c61a1fe2f12f189c9c62e0310731ed8b97d4aa745deb8470a"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057331 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"43761f858930e3361c93e26c546d75f9049fbeb0aed2f837e6aa6e57845c2002"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057339 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"9ec12f6ac4fb2fe1a0eaa6ae49e4dc13f98f460950b7abbd35ef0d14386f1613"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070492 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="c27b98be7532909d83c273cb34dc9fd9b883ddc36da5b9f8e235750c5110926b" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070532 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="758ecdf114748a274e883c4dfb0b65ebf29c160068e2fb6ba6a6ba341b7ad8cc" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070539 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="3e3a3d2095bc51e457a624c288843b1229f6edd0c9886caeecc8ae8c8fc0b366" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070548 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="9953487bd5a77e9cae7330ff25f691cf98544a779a3aaffa22289f618b2ab84e" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070555 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="f8537cd557c893811f4771b5d344fb7198348c938522cc5297a57e159c08bb1d" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070564 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="c90149cab3e82c3b3aa0479a8bfbfea30e029a20dbf49c2af33d575a0e5ad411" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070570 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="79013075c8cee2c124b6f705985ce81a282712a5c1945c3a1ae8d6d583285d31" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070578 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="f9c131e99fc59700893d4fbe2f7af6bf69bbf3f78f285398911d18a20c8ec6ea" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070584 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="79c3d3910a7c0f333be18e5912411834c0bc3275953b7a51d75d52ac387ddcaa" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070590 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="e6a2d028a5029018d31ffce2a967e9d0ddb51a1a19145baba94de8eb996b72f0" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070597 4763 generic.go:334] "Generic (PLEG): container finished" 
podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="bc321d01362d270731a7e972d44081e349a9ee7b3efcaf30f8babf36901ecf1e" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070604 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="8c3af34ffb6d8a6328c95ad20cd01fd80df1d4aa3cf85710910e19ed4a069343" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070610 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="26d18a07f1ba356bb7df9ab96b968aba9c5d64937ec37a59fae70520b88066d5" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070617 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="e8f41f27d9646e9b7f047f60ad0964f6a5ebd721b98bf742430a0e9d49f019dd" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070580 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"c27b98be7532909d83c273cb34dc9fd9b883ddc36da5b9f8e235750c5110926b"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070662 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"758ecdf114748a274e883c4dfb0b65ebf29c160068e2fb6ba6a6ba341b7ad8cc"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070675 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"3e3a3d2095bc51e457a624c288843b1229f6edd0c9886caeecc8ae8c8fc0b366"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070685 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"9953487bd5a77e9cae7330ff25f691cf98544a779a3aaffa22289f618b2ab84e"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070717 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"f8537cd557c893811f4771b5d344fb7198348c938522cc5297a57e159c08bb1d"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070727 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"c90149cab3e82c3b3aa0479a8bfbfea30e029a20dbf49c2af33d575a0e5ad411"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070739 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"79013075c8cee2c124b6f705985ce81a282712a5c1945c3a1ae8d6d583285d31"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070747 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"f9c131e99fc59700893d4fbe2f7af6bf69bbf3f78f285398911d18a20c8ec6ea"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070756 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"79c3d3910a7c0f333be18e5912411834c0bc3275953b7a51d75d52ac387ddcaa"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070764 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"e6a2d028a5029018d31ffce2a967e9d0ddb51a1a19145baba94de8eb996b72f0"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070772 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"bc321d01362d270731a7e972d44081e349a9ee7b3efcaf30f8babf36901ecf1e"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070802 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"8c3af34ffb6d8a6328c95ad20cd01fd80df1d4aa3cf85710910e19ed4a069343"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070811 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"26d18a07f1ba356bb7df9ab96b968aba9c5d64937ec37a59fae70520b88066d5"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070818 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"e8f41f27d9646e9b7f047f60ad0964f6a5ebd721b98bf742430a0e9d49f019dd"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.079187 4763 scope.go:117] "RemoveContainer" containerID="6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.102511 4763 scope.go:117] "RemoveContainer" containerID="604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7" Jan 31 15:13:48 crc kubenswrapper[4763]: E0131 15:13:48.103050 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7\": container with ID starting with 604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7 not found: ID does not exist" containerID="604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.103102 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7"} err="failed to get container status \"604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7\": rpc error: code = NotFound desc = could not find container \"604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7\": container with ID starting with 604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7 not found: ID does not exist" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.103129 4763 scope.go:117] "RemoveContainer" containerID="6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271" Jan 31 15:13:48 crc kubenswrapper[4763]: E0131 15:13:48.103658 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271\": container with 
ID starting with 6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271 not found: ID does not exist" containerID="6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.103688 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271"} err="failed to get container status \"6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271\": rpc error: code = NotFound desc = could not find container \"6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271\": container with ID starting with 6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271 not found: ID does not exist" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.103721 4763 scope.go:117] "RemoveContainer" containerID="604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.104022 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7"} err="failed to get container status \"604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7\": rpc error: code = NotFound desc = could not find container \"604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7\": container with ID starting with 604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7 not found: ID does not exist" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.104060 4763 scope.go:117] "RemoveContainer" containerID="6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.104397 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271"} err="failed to get container status \"6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271\": rpc error: code = NotFound desc = could not find container \"6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271\": container with ID starting with 6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271 not found: ID does not exist" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.115448 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfjq2\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-kube-api-access-gfjq2\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.115480 4763 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d952a9d1-9446-4003-b83c-9603f44fb634-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.115491 4763 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.115500 4763 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.115507 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.242758 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.317915 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-ring-data-devices\") pod \"754727d0-e275-4404-8805-af884fac0750\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.317973 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-swiftconf\") pod \"754727d0-e275-4404-8805-af884fac0750\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.318001 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-scripts\") pod \"754727d0-e275-4404-8805-af884fac0750\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.318054 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfn26\" (UniqueName: \"kubernetes.io/projected/754727d0-e275-4404-8805-af884fac0750-kube-api-access-rfn26\") pod \"754727d0-e275-4404-8805-af884fac0750\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.318140 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-dispersionconf\") pod \"754727d0-e275-4404-8805-af884fac0750\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.318158 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/754727d0-e275-4404-8805-af884fac0750-etc-swift\") pod \"754727d0-e275-4404-8805-af884fac0750\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.318568 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "754727d0-e275-4404-8805-af884fac0750" (UID: "754727d0-e275-4404-8805-af884fac0750"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.319032 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/754727d0-e275-4404-8805-af884fac0750-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "754727d0-e275-4404-8805-af884fac0750" (UID: "754727d0-e275-4404-8805-af884fac0750"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.323227 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/754727d0-e275-4404-8805-af884fac0750-kube-api-access-rfn26" (OuterVolumeSpecName: "kube-api-access-rfn26") pod "754727d0-e275-4404-8805-af884fac0750" (UID: "754727d0-e275-4404-8805-af884fac0750"). InnerVolumeSpecName "kube-api-access-rfn26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.333650 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-scripts" (OuterVolumeSpecName: "scripts") pod "754727d0-e275-4404-8805-af884fac0750" (UID: "754727d0-e275-4404-8805-af884fac0750"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.335918 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "754727d0-e275-4404-8805-af884fac0750" (UID: "754727d0-e275-4404-8805-af884fac0750"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.336898 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "754727d0-e275-4404-8805-af884fac0750" (UID: "754727d0-e275-4404-8805-af884fac0750"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.420031 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.420064 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/754727d0-e275-4404-8805-af884fac0750-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.420077 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.420089 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.420100 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.420111 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfn26\" (UniqueName: \"kubernetes.io/projected/754727d0-e275-4404-8805-af884fac0750-kube-api-access-rfn26\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.434906 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk"] Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.444584 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk"] Jan 31 15:13:49 crc kubenswrapper[4763]: I0131 15:13:49.059963 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="754727d0-e275-4404-8805-af884fac0750" path="/var/lib/kubelet/pods/754727d0-e275-4404-8805-af884fac0750/volumes" Jan 31 15:13:49 crc kubenswrapper[4763]: I0131 15:13:49.061049 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92920ef2-27a4-47ea-b8f0-220dc84853e4" path="/var/lib/kubelet/pods/92920ef2-27a4-47ea-b8f0-220dc84853e4/volumes" Jan 31 15:13:49 crc kubenswrapper[4763]: I0131 15:13:49.061758 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d952a9d1-9446-4003-b83c-9603f44fb634" path="/var/lib/kubelet/pods/d952a9d1-9446-4003-b83c-9603f44fb634/volumes" Jan 31 15:13:49 crc kubenswrapper[4763]: I0131 15:13:49.089754 4763 scope.go:117] "RemoveContainer" containerID="e321792cb928347235568231c246985837d0fc98a70e7f41e43b1fee17043a7c" Jan 31 15:13:49 crc kubenswrapper[4763]: I0131 15:13:49.089817 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.395282 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="c92a67d175e7b2e0d2f4e6d03791a25078677c370b55cc124a64ac4580b1a186" exitCode=137 Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.395455 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"c92a67d175e7b2e0d2f4e6d03791a25078677c370b55cc124a64ac4580b1a186"} Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.405499 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="a0512f3a95ac09e16cf86ec9e8bfe2bd51334b12e32f11edcc9924d32d0317da" exitCode=137 Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.405562 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"a0512f3a95ac09e16cf86ec9e8bfe2bd51334b12e32f11edcc9924d32d0317da"} Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.424164 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="0faa735c202a6e16bf9b9e8948e5792bcc5154cd3f56ada820b906ae813092c6" exitCode=137 Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.424208 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"0faa735c202a6e16bf9b9e8948e5792bcc5154cd3f56ada820b906ae813092c6"} Jan 31 15:14:17 crc kubenswrapper[4763]: E0131 15:14:17.532909 4763 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d83c163_ac13_4c3c_82cb_da30bdb664d4.slice/crio-conmon-0faa735c202a6e16bf9b9e8948e5792bcc5154cd3f56ada820b906ae813092c6.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fd7b0a2_4f68_44bc_8720_1dcb2d975beb.slice/crio-conmon-a0512f3a95ac09e16cf86ec9e8bfe2bd51334b12e32f11edcc9924d32d0317da.scope\": RecentStats: unable to find data in memory cache]" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.602609 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.740509 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.745283 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.773981 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-etc-swift\") pod \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.774045 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.774095 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5twbl\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-kube-api-access-5twbl\") pod \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.774166 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-cache\") pod \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.774187 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-lock\") pod \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.774815 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-lock" (OuterVolumeSpecName: "lock") pod "0d83c163-ac13-4c3c-82cb-da30bdb664d4" (UID: "0d83c163-ac13-4c3c-82cb-da30bdb664d4"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.775146 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-cache" (OuterVolumeSpecName: "cache") pod "0d83c163-ac13-4c3c-82cb-da30bdb664d4" (UID: "0d83c163-ac13-4c3c-82cb-da30bdb664d4"). InnerVolumeSpecName "cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.779778 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "swift") pod "0d83c163-ac13-4c3c-82cb-da30bdb664d4" (UID: "0d83c163-ac13-4c3c-82cb-da30bdb664d4"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.779783 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-kube-api-access-5twbl" (OuterVolumeSpecName: "kube-api-access-5twbl") pod "0d83c163-ac13-4c3c-82cb-da30bdb664d4" (UID: "0d83c163-ac13-4c3c-82cb-da30bdb664d4"). InnerVolumeSpecName "kube-api-access-5twbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.779848 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0d83c163-ac13-4c3c-82cb-da30bdb664d4" (UID: "0d83c163-ac13-4c3c-82cb-da30bdb664d4"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876095 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-cache\") pod \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876146 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876197 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjgnx\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-kube-api-access-mjgnx\") pod \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876259 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzq5q\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-kube-api-access-fzq5q\") pod \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876347 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-lock\") pod \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876498 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-etc-swift\") pod \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876690 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-cache\") pod \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876815 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-lock\") pod \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876855 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") pod \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876920 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.877050 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-lock" (OuterVolumeSpecName: "lock") pod "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.877162 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-cache" (OuterVolumeSpecName: "cache") pod "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.877203 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-cache" (OuterVolumeSpecName: "cache") pod "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" (UID: "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.877869 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-lock" (OuterVolumeSpecName: "lock") pod "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" (UID: "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb"). InnerVolumeSpecName "lock". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.878111 4763 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-cache\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.878145 4763 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-cache\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.878172 4763 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.878198 4763 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.878225 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.878274 4763 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.878302 4763 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-cache\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.878326 4763 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.878354 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5twbl\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-kube-api-access-5twbl\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.880035 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "swift") pod "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.880162 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-kube-api-access-mjgnx" (OuterVolumeSpecName: "kube-api-access-mjgnx") pod "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" (UID: "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb"). InnerVolumeSpecName "kube-api-access-mjgnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.881115 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "swift") pod "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" (UID: "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb"). 
InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.881739 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.882943 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-kube-api-access-fzq5q" (OuterVolumeSpecName: "kube-api-access-fzq5q") pod "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf"). InnerVolumeSpecName "kube-api-access-fzq5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.883052 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" (UID: "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.904276 4763 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.979659 4763 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.979735 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.979812 4763 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.979840 4763 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.979859 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjgnx\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-kube-api-access-mjgnx\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.979876 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzq5q\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-kube-api-access-fzq5q\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.979891 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.995001 4763 
operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.007472 4763 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.082560 4763 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.082616 4763 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.446804 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"2e0d604e834eeb3cec78cfb460add37121eda53b4eea073ea361954901be3261"} Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.447294 4763 scope.go:117] "RemoveContainer" containerID="0faa735c202a6e16bf9b9e8948e5792bcc5154cd3f56ada820b906ae813092c6" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.446926 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.463339 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"fc755e76f946cbf270f23b7d66505db67f9f56fbc2a1281c6fec2037e341ce02"} Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.463493 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.491077 4763 scope.go:117] "RemoveContainer" containerID="fc85ed3fbad7523183d7b9b76e6d1c6a4d902df91667936a1b1b3eb39ced033a" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.511621 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.515109 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"90e93209198d21ecb365141d71e0805661beff6af7f209b3c4513e9211db45f2"} Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.515246 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.532749 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.546649 4763 scope.go:117] "RemoveContainer" containerID="97469e71426efac22b111ee22106730f06552f0b2b6a901f3e3e8bc6c68198dc" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.549214 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.560561 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.572091 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.576043 4763 scope.go:117] "RemoveContainer" containerID="02a531233c8eff48d67d9eac0b9ddc4fec244753f6b3bfb9b21057bdb7be4c7c" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.582669 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.592851 4763 scope.go:117] "RemoveContainer" containerID="82d538c669bae5f7f95077ed345669166fe3742501df486afd6361232d21fcc5" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.611160 4763 scope.go:117] "RemoveContainer" containerID="e4b43e4f4b97a5362dd61572955b58929d58cd939580340917dd6856eee48c51" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.625663 4763 scope.go:117] "RemoveContainer" containerID="772b6f037f17f533f5cff02df3680cef890ca71af7aadc9080514f6990cfd4dc" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.641561 4763 scope.go:117] "RemoveContainer" containerID="f4ab4db4c5d9dcb6ccd57b84ea36065134a3bd7c3fff7268ce34ac3b6e1ba0ee" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.654853 4763 scope.go:117] "RemoveContainer" containerID="87fd27d178e3710cca6b6fe084307cf387592c330affd1b08797d73b27da0729" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.672720 4763 scope.go:117] "RemoveContainer" containerID="46dfcdcd0be805e9a45249b9185005e90a2633a0552782f7c25619de9a9e01d7" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.686510 4763 scope.go:117] "RemoveContainer" containerID="b145444ea56aafdcef5964b190965b77165b7812022b77df5b6a7fb787ea1f6b" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.700719 4763 scope.go:117] "RemoveContainer" containerID="db9e135b4f60c10dc082344ade61ed089817f4f5bb60cf75e034bd8fb673be95" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.726439 4763 scope.go:117] "RemoveContainer" containerID="95874f605a14427c61a1fe2f12f189c9c62e0310731ed8b97d4aa745deb8470a" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.754904 4763 scope.go:117] "RemoveContainer" containerID="43761f858930e3361c93e26c546d75f9049fbeb0aed2f837e6aa6e57845c2002" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.828877 4763 scope.go:117] "RemoveContainer" containerID="9ec12f6ac4fb2fe1a0eaa6ae49e4dc13f98f460950b7abbd35ef0d14386f1613" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.851852 4763 scope.go:117] "RemoveContainer" containerID="c92a67d175e7b2e0d2f4e6d03791a25078677c370b55cc124a64ac4580b1a186" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.867411 4763 scope.go:117] "RemoveContainer" containerID="c27b98be7532909d83c273cb34dc9fd9b883ddc36da5b9f8e235750c5110926b" Jan 31 
15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.882067 4763 scope.go:117] "RemoveContainer" containerID="758ecdf114748a274e883c4dfb0b65ebf29c160068e2fb6ba6a6ba341b7ad8cc" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.895958 4763 scope.go:117] "RemoveContainer" containerID="3e3a3d2095bc51e457a624c288843b1229f6edd0c9886caeecc8ae8c8fc0b366" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.909651 4763 scope.go:117] "RemoveContainer" containerID="9953487bd5a77e9cae7330ff25f691cf98544a779a3aaffa22289f618b2ab84e" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.925525 4763 scope.go:117] "RemoveContainer" containerID="f8537cd557c893811f4771b5d344fb7198348c938522cc5297a57e159c08bb1d" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.939129 4763 scope.go:117] "RemoveContainer" containerID="c90149cab3e82c3b3aa0479a8bfbfea30e029a20dbf49c2af33d575a0e5ad411" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.960780 4763 scope.go:117] "RemoveContainer" containerID="79013075c8cee2c124b6f705985ce81a282712a5c1945c3a1ae8d6d583285d31" Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.982312 4763 scope.go:117] "RemoveContainer" containerID="f9c131e99fc59700893d4fbe2f7af6bf69bbf3f78f285398911d18a20c8ec6ea" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.002079 4763 scope.go:117] "RemoveContainer" containerID="79c3d3910a7c0f333be18e5912411834c0bc3275953b7a51d75d52ac387ddcaa" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.017886 4763 scope.go:117] "RemoveContainer" containerID="e6a2d028a5029018d31ffce2a967e9d0ddb51a1a19145baba94de8eb996b72f0" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.031295 4763 scope.go:117] "RemoveContainer" containerID="bc321d01362d270731a7e972d44081e349a9ee7b3efcaf30f8babf36901ecf1e" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.046490 4763 scope.go:117] "RemoveContainer" containerID="8c3af34ffb6d8a6328c95ad20cd01fd80df1d4aa3cf85710910e19ed4a069343" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.051427 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" path="/var/lib/kubelet/pods/0d83c163-ac13-4c3c-82cb-da30bdb664d4/volumes" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.053237 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" path="/var/lib/kubelet/pods/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb/volumes" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.055338 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" path="/var/lib/kubelet/pods/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf/volumes" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.062615 4763 scope.go:117] "RemoveContainer" containerID="26d18a07f1ba356bb7df9ab96b968aba9c5d64937ec37a59fae70520b88066d5" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.086863 4763 scope.go:117] "RemoveContainer" containerID="e8f41f27d9646e9b7f047f60ad0964f6a5ebd721b98bf742430a0e9d49f019dd" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.114424 4763 scope.go:117] "RemoveContainer" containerID="a0512f3a95ac09e16cf86ec9e8bfe2bd51334b12e32f11edcc9924d32d0317da" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.134365 4763 scope.go:117] "RemoveContainer" containerID="2801ebc5b5111a3f354bc751c7f4df5a35e9e0f9f8c1b8f97dd7437b72bd0333" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.155088 4763 scope.go:117] "RemoveContainer" 
containerID="bcd8336fa8422ff8f4710866669ead2747397c3d7e3479845eca695012773964" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.179361 4763 scope.go:117] "RemoveContainer" containerID="11ca22dc3aea18b7002ee209c33ea918f1a6a74de6a87b3e0282c7b043144034" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.194896 4763 scope.go:117] "RemoveContainer" containerID="53cc7d03295052ed760abf6e7bf461dbb5c915e085d4733f09f4b88dee41fce6" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.227535 4763 scope.go:117] "RemoveContainer" containerID="44221e09a6221477f521497096958975eb2d81c909e9df7abcc70e7a6affe606" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.244570 4763 scope.go:117] "RemoveContainer" containerID="900954e8a49f5e385b29971b0076eb09818fee16df406d5bda2ffa32153b8f2c" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.258197 4763 scope.go:117] "RemoveContainer" containerID="b347ad616d945afabba996ef7b64761509d97dec2de5dbce22818ebd309bb2fd" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.281082 4763 scope.go:117] "RemoveContainer" containerID="65af4eadceb3880bfd8c0ae7c91be9262b951343a51ea6f4b387d7805c18a203" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.295563 4763 scope.go:117] "RemoveContainer" containerID="62e05d6e34f51a24fb36847b0e659c0ded9040b7a9df03f170ac2888f520a2f7" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.308029 4763 scope.go:117] "RemoveContainer" containerID="c1724ae1efdd76da3683f41ca56bb7dec66988978a2f0f763be0b355c56b24d4" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.323005 4763 scope.go:117] "RemoveContainer" containerID="db2e802c7fb1380e7d58552715a2bbc320cf91d8405eb9d9f94c80688bdaa86e" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.338522 4763 scope.go:117] "RemoveContainer" containerID="b66057b270aafdfd00402f417700789d4eb2a09f6b95a3210ba44757a3173fb1" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.352116 4763 scope.go:117] "RemoveContainer" containerID="1162fdbabfecb0f41f85a7d666a0cc25c2eb72256d57c4d4b10c1e0aa3ad58c6" Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.367295 4763 scope.go:117] "RemoveContainer" containerID="afa6a3f7b8649103325ad5315e9dc80fef7ee37028bfc085ed5215505185d150" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.468672 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469503 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-replicator" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469524 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-replicator" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469542 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-auditor" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469555 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-auditor" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469578 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d952a9d1-9446-4003-b83c-9603f44fb634" containerName="proxy-httpd" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469593 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="d952a9d1-9446-4003-b83c-9603f44fb634" 
containerName="proxy-httpd" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469610 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-replicator" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469622 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-replicator" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469641 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-replicator" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469652 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-replicator" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469674 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-auditor" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469686 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-auditor" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469746 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-server" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469761 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-server" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469779 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-reaper" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469792 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-reaper" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469809 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-updater" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469821 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-updater" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469842 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-updater" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469854 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-updater" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469870 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-replicator" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469882 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-replicator" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469902 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-server" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469914 4763 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-server" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469928 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-server" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469940 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-server" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469958 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-replicator" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469970 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-replicator" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469992 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-auditor" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470003 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-auditor" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470024 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-updater" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470037 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-updater" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470055 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="swift-recon-cron" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470068 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="swift-recon-cron" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470086 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="rsync" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470098 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="rsync" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470119 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-replicator" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470131 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-replicator" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470144 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="swift-recon-cron" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470155 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="swift-recon-cron" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470174 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-server" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470189 4763 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-server" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470204 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-auditor" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470216 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-auditor" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470232 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="754727d0-e275-4404-8805-af884fac0750" containerName="swift-ring-rebalance" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470244 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="754727d0-e275-4404-8805-af884fac0750" containerName="swift-ring-rebalance" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470265 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-auditor" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470277 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-auditor" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470296 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-server" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470308 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-server" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470320 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-expirer" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470332 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-expirer" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470353 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="rsync" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470364 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="rsync" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470388 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-auditor" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470400 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-auditor" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470415 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-reaper" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470428 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-reaper" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470447 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-updater" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470458 4763 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-updater" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470479 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-expirer" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470490 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-expirer" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470507 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-server" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470518 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-server" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470532 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-replicator" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470544 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-replicator" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470564 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-expirer" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470577 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-expirer" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470593 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-updater" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470604 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-updater" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470623 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-auditor" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470635 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-auditor" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470653 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-server" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470665 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-server" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470688 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-auditor" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470729 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-auditor" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470749 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d952a9d1-9446-4003-b83c-9603f44fb634" containerName="proxy-server" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 
15:14:21.470765 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="d952a9d1-9446-4003-b83c-9603f44fb634" containerName="proxy-server" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470791 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-updater" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470807 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-updater" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470831 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-replicator" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470847 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-replicator" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470871 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-replicator" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470888 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-replicator" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470912 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="rsync" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470924 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="rsync" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470938 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-server" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471332 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-server" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.471358 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-auditor" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471370 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-auditor" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.471388 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-server" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471400 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-server" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.471419 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-reaper" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471431 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-reaper" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.471452 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="swift-recon-cron" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 
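The paired cpu_manager.go / state_mem.go lines above (and the memory_manager.go lines that follow) show the kubelet dropping per-container resource assignments that belong to pods which no longer exist, triggered when the replacement swift-storage-0 is admitted. A minimal Go sketch of that stale-state bookkeeping under hypothetical types; the real managers keep much richer state.

package main

import "fmt"

type key struct{ podUID, containerName string }

// staleStateStore maps (podUID, containerName) to an assignment, e.g. a CPU set.
type staleStateStore map[key]string

// removeStaleState deletes every assignment whose pod UID is not in activePods,
// logging one line per removal, as in the log excerpt above.
func (s staleStateStore) removeStaleState(activePods map[string]bool) {
	for k := range s {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.containerName)
			delete(s, k) // deleting during range is safe in Go
		}
	}
}

func main() {
	s := staleStateStore{
		{podUID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf", containerName: "account-replicator"}: "0-3",
	}
	// The old swift-storage pods were deleted, so every recorded assignment is stale.
	s.removeStaleState(map[string]bool{})
}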
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471737 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471756 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471771 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="swift-recon-cron"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471790 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="rsync"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471808 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471820 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-expirer"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471842 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471856 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="swift-recon-cron"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471873 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471889 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471904 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="rsync"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471915 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471934 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471949 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471964 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471987 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472012 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472038 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="d952a9d1-9446-4003-b83c-9603f44fb634" containerName="proxy-httpd"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472063 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472083 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-expirer"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472099 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472118 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472134 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="d952a9d1-9446-4003-b83c-9603f44fb634" containerName="proxy-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472150 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472193 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472211 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472228 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472245 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-reaper"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472260 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472276 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472293 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472307 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472324 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472336 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472352 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="rsync"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472370 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="swift-recon-cron"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472383 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-expirer"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472400 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-reaper"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472411 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-reaper"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472426 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472440 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472454 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472467 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472482 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472497 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472511 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="754727d0-e275-4404-8805-af884fac0750" containerName="swift-ring-rebalance"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472525 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472540 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.480443 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.483052 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-nv564" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.486772 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.486980 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.488675 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.503155 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.634497 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd4v2\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-kube-api-access-pd4v2\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.634528 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.634558 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-cache\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.634622 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-lock\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.634653 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.736663 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-lock\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.736765 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:21 crc 
kubenswrapper[4763]: I0131 15:14:21.736859 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd4v2\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-kube-api-access-pd4v2\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.736889 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.736924 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-cache\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.737306 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.737394 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.737441 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-cache\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.737560 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift podName:0f637cde-45f1-4c1e-b345-7f89e17eccc6 nodeName:}" failed. No retries permitted until 2026-01-31 15:14:22.237540842 +0000 UTC m=+1181.992279225 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift") pod "swift-storage-0" (UID: "0f637cde-45f1-4c1e-b345-7f89e17eccc6") : configmap "swift-ring-files" not found Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.737681 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-lock\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.737842 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") device mount path \"/mnt/openstack/pv03\"" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.759935 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd4v2\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-kube-api-access-pd4v2\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.768741 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:22 crc kubenswrapper[4763]: I0131 15:14:22.244028 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:22 crc kubenswrapper[4763]: E0131 15:14:22.244263 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:14:22 crc kubenswrapper[4763]: E0131 15:14:22.244298 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:14:22 crc kubenswrapper[4763]: E0131 15:14:22.244395 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift podName:0f637cde-45f1-4c1e-b345-7f89e17eccc6 nodeName:}" failed. No retries permitted until 2026-01-31 15:14:23.244357124 +0000 UTC m=+1182.999095447 (durationBeforeRetry 1s). 
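The durationBeforeRetry values here and below (500ms, 1s, 2s, 4s, then 8s) show the volume manager backing off exponentially while the swift-ring-files ConfigMap does not yet exist: each failed MountVolume.SetUp doubles the wait before the next attempt. A small Go sketch of that retry schedule, assuming an initial 500ms delay and a plain doubling rule; the constants merely match the observed log, they are not kubelet configuration values.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff keeps calling op, doubling the delay after each failure,
// until op succeeds or maxAttempts is reached.
func retryWithBackoff(op func() error, initial time.Duration, maxAttempts int) error {
	delay := initial
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err := op(); err == nil {
			return nil
		} else if attempt == maxAttempts {
			return err
		} else {
			fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, delay)
			time.Sleep(delay)
			delay *= 2
		}
	}
	return nil
}

func main() {
	mountEtcSwift := func() error {
		// Fails until swift-ring-rebalance publishes the rings ConfigMap.
		return errors.New(`configmap "swift-ring-files" not found`)
	}
	// Waits 500ms, 1s, 2s, 4s between the five attempts, as in the log.
	_ = retryWithBackoff(mountEtcSwift, 500*time.Millisecond, 5)
}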
Jan 31 15:14:23 crc kubenswrapper[4763]: I0131 15:14:23.258870 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:23 crc kubenswrapper[4763]: E0131 15:14:23.259195 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:14:23 crc kubenswrapper[4763]: E0131 15:14:23.259240 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:14:23 crc kubenswrapper[4763]: E0131 15:14:23.259346 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift podName:0f637cde-45f1-4c1e-b345-7f89e17eccc6 nodeName:}" failed. No retries permitted until 2026-01-31 15:14:25.259312643 +0000 UTC m=+1185.014050986 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift") pod "swift-storage-0" (UID: "0f637cde-45f1-4c1e-b345-7f89e17eccc6") : configmap "swift-ring-files" not found
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.290768 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:25 crc kubenswrapper[4763]: E0131 15:14:25.291062 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:14:25 crc kubenswrapper[4763]: E0131 15:14:25.291447 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:14:25 crc kubenswrapper[4763]: E0131 15:14:25.291543 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift podName:0f637cde-45f1-4c1e-b345-7f89e17eccc6 nodeName:}" failed. No retries permitted until 2026-01-31 15:14:29.29151125 +0000 UTC m=+1189.046249583 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift") pod "swift-storage-0" (UID: "0f637cde-45f1-4c1e-b345-7f89e17eccc6") : configmap "swift-ring-files" not found
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.354986 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-6fg48"]
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.355999 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.357939 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.358347 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.358719 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.378256 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-6fg48"]
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.493781 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-ring-data-devices\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.493832 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqjcf\" (UniqueName: \"kubernetes.io/projected/52d85cde-9969-46d1-9e16-44c5747493cc-kube-api-access-gqjcf\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.494043 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-swiftconf\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.494131 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-scripts\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.494184 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52d85cde-9969-46d1-9e16-44c5747493cc-etc-swift\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.494244 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-dispersionconf\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.595865 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-scripts\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.596266 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52d85cde-9969-46d1-9e16-44c5747493cc-etc-swift\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.596446 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-dispersionconf\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.596725 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52d85cde-9969-46d1-9e16-44c5747493cc-etc-swift\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.596858 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-scripts\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.596872 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-ring-data-devices\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.597170 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqjcf\" (UniqueName: \"kubernetes.io/projected/52d85cde-9969-46d1-9e16-44c5747493cc-kube-api-access-gqjcf\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.597369 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-swiftconf\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.597391 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-ring-data-devices\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.601516 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-swiftconf\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
kubenswrapper[4763]: I0131 15:14:25.606285 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-dispersionconf\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48" Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.621759 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqjcf\" (UniqueName: \"kubernetes.io/projected/52d85cde-9969-46d1-9e16-44c5747493cc-kube-api-access-gqjcf\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48" Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.685078 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-6fg48" Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.914170 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-6fg48"] Jan 31 15:14:26 crc kubenswrapper[4763]: I0131 15:14:26.621816 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-6fg48" event={"ID":"52d85cde-9969-46d1-9e16-44c5747493cc","Type":"ContainerStarted","Data":"203b36129261a511c80fe2b8e1a92066fc0b81cc45cfe796a72d0edaa9da1993"} Jan 31 15:14:26 crc kubenswrapper[4763]: I0131 15:14:26.621883 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-6fg48" event={"ID":"52d85cde-9969-46d1-9e16-44c5747493cc","Type":"ContainerStarted","Data":"49906396a5fab4b0e67ab645b9ec1480523b83bf3b1aa1767c5464699ae64df6"} Jan 31 15:14:26 crc kubenswrapper[4763]: I0131 15:14:26.663401 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-6fg48" podStartSLOduration=1.66336668 podStartE2EDuration="1.66336668s" podCreationTimestamp="2026-01-31 15:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:14:26.651376105 +0000 UTC m=+1186.406114398" watchObservedRunningTime="2026-01-31 15:14:26.66336668 +0000 UTC m=+1186.418105013" Jan 31 15:14:29 crc kubenswrapper[4763]: I0131 15:14:29.362183 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:29 crc kubenswrapper[4763]: E0131 15:14:29.362435 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:14:29 crc kubenswrapper[4763]: E0131 15:14:29.362480 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:14:29 crc kubenswrapper[4763]: E0131 15:14:29.362576 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift podName:0f637cde-45f1-4c1e-b345-7f89e17eccc6 nodeName:}" failed. No retries permitted until 2026-01-31 15:14:37.362541369 +0000 UTC m=+1197.117279692 (durationBeforeRetry 8s). 
Jan 31 15:14:32 crc kubenswrapper[4763]: I0131 15:14:32.671174 4763 generic.go:334] "Generic (PLEG): container finished" podID="52d85cde-9969-46d1-9e16-44c5747493cc" containerID="203b36129261a511c80fe2b8e1a92066fc0b81cc45cfe796a72d0edaa9da1993" exitCode=0
Jan 31 15:14:32 crc kubenswrapper[4763]: I0131 15:14:32.671238 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-6fg48" event={"ID":"52d85cde-9969-46d1-9e16-44c5747493cc","Type":"ContainerDied","Data":"203b36129261a511c80fe2b8e1a92066fc0b81cc45cfe796a72d0edaa9da1993"}
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.010053 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.133379 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-scripts\") pod \"52d85cde-9969-46d1-9e16-44c5747493cc\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") "
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.133465 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqjcf\" (UniqueName: \"kubernetes.io/projected/52d85cde-9969-46d1-9e16-44c5747493cc-kube-api-access-gqjcf\") pod \"52d85cde-9969-46d1-9e16-44c5747493cc\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") "
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.133606 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52d85cde-9969-46d1-9e16-44c5747493cc-etc-swift\") pod \"52d85cde-9969-46d1-9e16-44c5747493cc\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") "
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.133651 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-swiftconf\") pod \"52d85cde-9969-46d1-9e16-44c5747493cc\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") "
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.133691 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-dispersionconf\") pod \"52d85cde-9969-46d1-9e16-44c5747493cc\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") "
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.133804 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-ring-data-devices\") pod \"52d85cde-9969-46d1-9e16-44c5747493cc\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") "
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.134295 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "52d85cde-9969-46d1-9e16-44c5747493cc" (UID: "52d85cde-9969-46d1-9e16-44c5747493cc"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.134643 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52d85cde-9969-46d1-9e16-44c5747493cc-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "52d85cde-9969-46d1-9e16-44c5747493cc" (UID: "52d85cde-9969-46d1-9e16-44c5747493cc"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.140272 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d85cde-9969-46d1-9e16-44c5747493cc-kube-api-access-gqjcf" (OuterVolumeSpecName: "kube-api-access-gqjcf") pod "52d85cde-9969-46d1-9e16-44c5747493cc" (UID: "52d85cde-9969-46d1-9e16-44c5747493cc"). InnerVolumeSpecName "kube-api-access-gqjcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.144101 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "52d85cde-9969-46d1-9e16-44c5747493cc" (UID: "52d85cde-9969-46d1-9e16-44c5747493cc"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.158196 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "52d85cde-9969-46d1-9e16-44c5747493cc" (UID: "52d85cde-9969-46d1-9e16-44c5747493cc"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.163734 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-scripts" (OuterVolumeSpecName: "scripts") pod "52d85cde-9969-46d1-9e16-44c5747493cc" (UID: "52d85cde-9969-46d1-9e16-44c5747493cc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.235725 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52d85cde-9969-46d1-9e16-44c5747493cc-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.236005 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.236014 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.236024 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.236033 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.236044 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqjcf\" (UniqueName: \"kubernetes.io/projected/52d85cde-9969-46d1-9e16-44c5747493cc-kube-api-access-gqjcf\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.690563 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-6fg48" event={"ID":"52d85cde-9969-46d1-9e16-44c5747493cc","Type":"ContainerDied","Data":"49906396a5fab4b0e67ab645b9ec1480523b83bf3b1aa1767c5464699ae64df6"} Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.690890 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49906396a5fab4b0e67ab645b9ec1480523b83bf3b1aa1767c5464699ae64df6" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.690642 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-6fg48" Jan 31 15:14:37 crc kubenswrapper[4763]: I0131 15:14:37.383782 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:37 crc kubenswrapper[4763]: I0131 15:14:37.405010 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:37 crc kubenswrapper[4763]: I0131 15:14:37.412161 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:37 crc kubenswrapper[4763]: I0131 15:14:37.901647 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:14:38 crc kubenswrapper[4763]: I0131 15:14:38.725008 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"42ec21618efaa5ddb06f0dc914d466c534df09064f84bd8dfd7d4e397b2993dc"} Jan 31 15:14:38 crc kubenswrapper[4763]: I0131 15:14:38.725296 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"ad6aed25c6e7195d02585e59f08a712946e5140839e590d00e1fd3b656f906e1"} Jan 31 15:14:38 crc kubenswrapper[4763]: I0131 15:14:38.725311 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"9deafbdec521654222135fae4a75189b7d212a38e26a9e35528481bb48dfe25d"} Jan 31 15:14:38 crc kubenswrapper[4763]: I0131 15:14:38.725320 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"6e2e9e90507c6c3f151001d225cf3ffd2d74fbb47248c3805eb15d884a5abba9"} Jan 31 15:14:39 crc kubenswrapper[4763]: I0131 15:14:39.740377 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"eb52d0ca91a944d5c8ad2fa653c9fa6ee50c07649415a73ade8c5c4153e97392"} Jan 31 15:14:39 crc kubenswrapper[4763]: I0131 15:14:39.740780 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"ddb0e5002dc17cb138568e744262f32bbfd952fee6858962182e6dd4037f223b"} Jan 31 15:14:39 crc kubenswrapper[4763]: I0131 15:14:39.740796 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"38e47bd1962160574137a5d84c3fcba7de4d7e7d941eaa6fc85fc7d17c701c2b"} Jan 31 15:14:39 crc kubenswrapper[4763]: I0131 15:14:39.740805 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"7d80e2b71f4cf2dfe389248e41642348209b0bf63a38f66e95b3a2c6e7260234"} Jan 31 15:14:39 crc kubenswrapper[4763]: I0131 15:14:39.740813 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"0ed0e66180b91482a31099e92316c33c11bb10524bf629042109c378e3080682"} Jan 31 15:14:39 crc kubenswrapper[4763]: I0131 15:14:39.740823 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"485cbd77da69f3c2a508872f5ca6e889f79adf0023dc9705725e5bb72e9b68f0"} Jan 31 15:14:40 crc kubenswrapper[4763]: I0131 15:14:40.753277 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"9c5b86851732a69e6e48096dd6f1b888f12fe142cae2d0bd8bef364d2e611b8e"} Jan 31 15:14:40 crc kubenswrapper[4763]: I0131 15:14:40.754250 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"5af94547d6eae1fcceeb9f2af0d269373db899e6b1f77d785fa71bb533c4ff1e"} Jan 31 15:14:40 crc kubenswrapper[4763]: I0131 15:14:40.754361 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"370769a37b25c50b9d2a6455c870de2ac9df87a2e7618e148b0d0f1c2ea87ea4"} Jan 31 15:14:40 crc kubenswrapper[4763]: I0131 15:14:40.754467 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"66b1c2a9de2f47ccee33ed4084db9a99570169d12537c16e6734017f17b75f44"} Jan 31 15:14:40 crc kubenswrapper[4763]: I0131 15:14:40.754543 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"6eded90931167b47c7a89da68469090108a9e35bce81085ea08e8704905bbdf6"} Jan 31 15:14:40 crc kubenswrapper[4763]: I0131 15:14:40.754646 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"f11355f4c90e474a9eb7e8bffad1d97535b6d05264b819d2663c38b16a03b8b0"} Jan 31 15:14:40 crc kubenswrapper[4763]: I0131 15:14:40.754727 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"16d25b731fa680bf545bedd9ee8124c58230c5826aad06eaed21d44ebcb95b3f"} Jan 31 15:14:41 crc kubenswrapper[4763]: I0131 15:14:41.801674 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=21.80165402 podStartE2EDuration="21.80165402s" podCreationTimestamp="2026-01-31 15:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:14:41.796405633 +0000 UTC m=+1201.551143926" watchObservedRunningTime="2026-01-31 15:14:41.80165402 +0000 UTC m=+1201.556392313" Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.862920 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"] Jan 31 15:14:46 crc kubenswrapper[4763]: E0131 15:14:46.863962 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d85cde-9969-46d1-9e16-44c5747493cc" containerName="swift-ring-rebalance" Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.863988 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d85cde-9969-46d1-9e16-44c5747493cc" containerName="swift-ring-rebalance" Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.864208 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d85cde-9969-46d1-9e16-44c5747493cc" containerName="swift-ring-rebalance" Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.867291 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.874176 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"] Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.907547 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data" Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.944955 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-etc-swift\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.945067 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-log-httpd\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.945125 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-config-data\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.945156 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wzbf\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-kube-api-access-7wzbf\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.945217 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-run-httpd\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.046720 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wzbf\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-kube-api-access-7wzbf\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.046802 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-run-httpd\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.046873 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-etc-swift\") pod 
\"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.046916 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-log-httpd\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.046963 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-config-data\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.047684 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-log-httpd\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.047939 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-run-httpd\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.054680 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-config-data\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.055768 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-etc-swift\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.068951 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wzbf\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-kube-api-access-7wzbf\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.228107 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.695918 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"] Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.813664 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" event={"ID":"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0","Type":"ContainerStarted","Data":"b119bcbbd6a3db6b88bff647faeec6bfa2e07015bb14d46d3cb45d7d5768e7de"} Jan 31 15:14:48 crc kubenswrapper[4763]: I0131 15:14:48.820502 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" event={"ID":"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0","Type":"ContainerStarted","Data":"7408e446c82f19d8bca458d55cad69dd2743a440768ce18ecfc4933fcd0795d6"} Jan 31 15:14:48 crc kubenswrapper[4763]: I0131 15:14:48.820752 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" event={"ID":"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0","Type":"ContainerStarted","Data":"c75bfd6add657441b6a71901eb789ee4a6824e5f58e98d0ae1677f3d7f9ec850"} Jan 31 15:14:48 crc kubenswrapper[4763]: I0131 15:14:48.821559 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:48 crc kubenswrapper[4763]: I0131 15:14:48.821580 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:48 crc kubenswrapper[4763]: I0131 15:14:48.844382 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" podStartSLOduration=2.844367173 podStartE2EDuration="2.844367173s" podCreationTimestamp="2026-01-31 15:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:14:48.837141623 +0000 UTC m=+1208.591879916" watchObservedRunningTime="2026-01-31 15:14:48.844367173 +0000 UTC m=+1208.599105466" Jan 31 15:14:57 crc kubenswrapper[4763]: I0131 15:14:57.230819 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:57 crc kubenswrapper[4763]: I0131 15:14:57.231328 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:14:58 crc kubenswrapper[4763]: I0131 15:14:58.876505 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"] Jan 31 15:14:58 crc kubenswrapper[4763]: I0131 15:14:58.878789 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" Jan 31 15:14:58 crc kubenswrapper[4763]: I0131 15:14:58.882316 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 15:14:58 crc kubenswrapper[4763]: I0131 15:14:58.883658 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 15:14:58 crc kubenswrapper[4763]: I0131 15:14:58.884762 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"] Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.030534 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvg4b\" (UniqueName: \"kubernetes.io/projected/9fa1aa7a-aa74-4771-a74a-51c73cd37867-kube-api-access-dvg4b\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.030605 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-scripts\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.030631 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-swiftconf\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.030667 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9fa1aa7a-aa74-4771-a74a-51c73cd37867-etc-swift\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.030685 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-ring-data-devices\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.030731 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-dispersionconf\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.132057 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvg4b\" (UniqueName: \"kubernetes.io/projected/9fa1aa7a-aa74-4771-a74a-51c73cd37867-kube-api-access-dvg4b\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" 
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.132121 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-scripts\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.132145 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-swiftconf\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.132177 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9fa1aa7a-aa74-4771-a74a-51c73cd37867-etc-swift\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.132194 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-ring-data-devices\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.132226 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-dispersionconf\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.133129 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9fa1aa7a-aa74-4771-a74a-51c73cd37867-etc-swift\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.133372 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-ring-data-devices\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.133748 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-scripts\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.137618 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-dispersionconf\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.137658 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-swiftconf\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.147464 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvg4b\" (UniqueName: \"kubernetes.io/projected/9fa1aa7a-aa74-4771-a74a-51c73cd37867-kube-api-access-dvg4b\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.210443 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.615559 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"] Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.911595 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" event={"ID":"9fa1aa7a-aa74-4771-a74a-51c73cd37867","Type":"ContainerStarted","Data":"dc43ea79353044d825fa92732b826ecb7bb9d81d79e11fde2b7c3d0258701fc2"} Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.911644 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" event={"ID":"9fa1aa7a-aa74-4771-a74a-51c73cd37867","Type":"ContainerStarted","Data":"873824f34424ae78088f76f80ab4ed2704a83af62bd0bddc461bf62e64ee83f7"} Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.936626 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" podStartSLOduration=1.936593841 podStartE2EDuration="1.936593841s" podCreationTimestamp="2026-01-31 15:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:14:59.93655674 +0000 UTC m=+1219.691295053" watchObservedRunningTime="2026-01-31 15:14:59.936593841 +0000 UTC m=+1219.691332174" Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.144773 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"] Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.145829 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.147813 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.147948 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.153786 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"] Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.250105 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7b6cd4d-3b2e-456c-afec-739df2a5e910-config-volume\") pod \"collect-profiles-29497875-4wkrh\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.250218 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59lfh\" (UniqueName: \"kubernetes.io/projected/e7b6cd4d-3b2e-456c-afec-739df2a5e910-kube-api-access-59lfh\") pod \"collect-profiles-29497875-4wkrh\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.250313 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7b6cd4d-3b2e-456c-afec-739df2a5e910-secret-volume\") pod \"collect-profiles-29497875-4wkrh\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.351731 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7b6cd4d-3b2e-456c-afec-739df2a5e910-secret-volume\") pod \"collect-profiles-29497875-4wkrh\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.351835 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7b6cd4d-3b2e-456c-afec-739df2a5e910-config-volume\") pod \"collect-profiles-29497875-4wkrh\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.352008 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59lfh\" (UniqueName: \"kubernetes.io/projected/e7b6cd4d-3b2e-456c-afec-739df2a5e910-kube-api-access-59lfh\") pod \"collect-profiles-29497875-4wkrh\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.352870 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7b6cd4d-3b2e-456c-afec-739df2a5e910-config-volume\") pod 
\"collect-profiles-29497875-4wkrh\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.362192 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7b6cd4d-3b2e-456c-afec-739df2a5e910-secret-volume\") pod \"collect-profiles-29497875-4wkrh\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.372215 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59lfh\" (UniqueName: \"kubernetes.io/projected/e7b6cd4d-3b2e-456c-afec-739df2a5e910-kube-api-access-59lfh\") pod \"collect-profiles-29497875-4wkrh\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.518822 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.749949 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"] Jan 31 15:15:00 crc kubenswrapper[4763]: W0131 15:15:00.750201 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7b6cd4d_3b2e_456c_afec_739df2a5e910.slice/crio-183b14b3f0f251597c5eee8bd052ad374fea7ca89988627b65c901cc3a3efbb3 WatchSource:0}: Error finding container 183b14b3f0f251597c5eee8bd052ad374fea7ca89988627b65c901cc3a3efbb3: Status 404 returned error can't find the container with id 183b14b3f0f251597c5eee8bd052ad374fea7ca89988627b65c901cc3a3efbb3 Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.919885 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" event={"ID":"e7b6cd4d-3b2e-456c-afec-739df2a5e910","Type":"ContainerStarted","Data":"2b0b52b31674e8c05c098868af13d05d526cd753e9a261dde704ad7aa24d9b2d"} Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.920448 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" event={"ID":"e7b6cd4d-3b2e-456c-afec-739df2a5e910","Type":"ContainerStarted","Data":"183b14b3f0f251597c5eee8bd052ad374fea7ca89988627b65c901cc3a3efbb3"} Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.942962 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" podStartSLOduration=0.942938803 podStartE2EDuration="942.938803ms" podCreationTimestamp="2026-01-31 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:15:00.941866975 +0000 UTC m=+1220.696605288" watchObservedRunningTime="2026-01-31 15:15:00.942938803 +0000 UTC m=+1220.697677096" Jan 31 15:15:01 crc kubenswrapper[4763]: I0131 15:15:01.934288 4763 generic.go:334] "Generic (PLEG): container finished" podID="e7b6cd4d-3b2e-456c-afec-739df2a5e910" containerID="2b0b52b31674e8c05c098868af13d05d526cd753e9a261dde704ad7aa24d9b2d" exitCode=0 Jan 31 15:15:01 crc kubenswrapper[4763]: I0131 15:15:01.934339 
Jan 31 15:15:02 crc kubenswrapper[4763]: I0131 15:15:02.947584 4763 generic.go:334] "Generic (PLEG): container finished" podID="9fa1aa7a-aa74-4771-a74a-51c73cd37867" containerID="dc43ea79353044d825fa92732b826ecb7bb9d81d79e11fde2b7c3d0258701fc2" exitCode=0
Jan 31 15:15:02 crc kubenswrapper[4763]: I0131 15:15:02.947689 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" event={"ID":"9fa1aa7a-aa74-4771-a74a-51c73cd37867","Type":"ContainerDied","Data":"dc43ea79353044d825fa92732b826ecb7bb9d81d79e11fde2b7c3d0258701fc2"}
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.271909 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.394935 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7b6cd4d-3b2e-456c-afec-739df2a5e910-config-volume\") pod \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") "
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.395023 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7b6cd4d-3b2e-456c-afec-739df2a5e910-secret-volume\") pod \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") "
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.395271 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59lfh\" (UniqueName: \"kubernetes.io/projected/e7b6cd4d-3b2e-456c-afec-739df2a5e910-kube-api-access-59lfh\") pod \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") "
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.395653 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7b6cd4d-3b2e-456c-afec-739df2a5e910-config-volume" (OuterVolumeSpecName: "config-volume") pod "e7b6cd4d-3b2e-456c-afec-739df2a5e910" (UID: "e7b6cd4d-3b2e-456c-afec-739df2a5e910"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.395861 4763 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7b6cd4d-3b2e-456c-afec-739df2a5e910-config-volume\") on node \"crc\" DevicePath \"\""
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.401544 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7b6cd4d-3b2e-456c-afec-739df2a5e910-kube-api-access-59lfh" (OuterVolumeSpecName: "kube-api-access-59lfh") pod "e7b6cd4d-3b2e-456c-afec-739df2a5e910" (UID: "e7b6cd4d-3b2e-456c-afec-739df2a5e910"). InnerVolumeSpecName "kube-api-access-59lfh". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.401959 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b6cd4d-3b2e-456c-afec-739df2a5e910-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e7b6cd4d-3b2e-456c-afec-739df2a5e910" (UID: "e7b6cd4d-3b2e-456c-afec-739df2a5e910"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.496571 4763 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7b6cd4d-3b2e-456c-afec-739df2a5e910-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.496610 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59lfh\" (UniqueName: \"kubernetes.io/projected/e7b6cd4d-3b2e-456c-afec-739df2a5e910-kube-api-access-59lfh\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.961599 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.961616 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" event={"ID":"e7b6cd4d-3b2e-456c-afec-739df2a5e910","Type":"ContainerDied","Data":"183b14b3f0f251597c5eee8bd052ad374fea7ca89988627b65c901cc3a3efbb3"} Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.961669 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="183b14b3f0f251597c5eee8bd052ad374fea7ca89988627b65c901cc3a3efbb3" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.347377 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.396896 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"] Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.407311 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"] Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.515013 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvg4b\" (UniqueName: \"kubernetes.io/projected/9fa1aa7a-aa74-4771-a74a-51c73cd37867-kube-api-access-dvg4b\") pod \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.515393 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-ring-data-devices\") pod \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.515448 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-scripts\") pod \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.515511 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-swiftconf\") pod \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.515595 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9fa1aa7a-aa74-4771-a74a-51c73cd37867-etc-swift\") pod \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.515654 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-dispersionconf\") pod \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.517003 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "9fa1aa7a-aa74-4771-a74a-51c73cd37867" (UID: "9fa1aa7a-aa74-4771-a74a-51c73cd37867"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.517320 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fa1aa7a-aa74-4771-a74a-51c73cd37867-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9fa1aa7a-aa74-4771-a74a-51c73cd37867" (UID: "9fa1aa7a-aa74-4771-a74a-51c73cd37867"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.520859 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa1aa7a-aa74-4771-a74a-51c73cd37867-kube-api-access-dvg4b" (OuterVolumeSpecName: "kube-api-access-dvg4b") pod "9fa1aa7a-aa74-4771-a74a-51c73cd37867" (UID: "9fa1aa7a-aa74-4771-a74a-51c73cd37867"). InnerVolumeSpecName "kube-api-access-dvg4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.542257 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "9fa1aa7a-aa74-4771-a74a-51c73cd37867" (UID: "9fa1aa7a-aa74-4771-a74a-51c73cd37867"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.542809 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "9fa1aa7a-aa74-4771-a74a-51c73cd37867" (UID: "9fa1aa7a-aa74-4771-a74a-51c73cd37867"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.548262 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-scripts" (OuterVolumeSpecName: "scripts") pod "9fa1aa7a-aa74-4771-a74a-51c73cd37867" (UID: "9fa1aa7a-aa74-4771-a74a-51c73cd37867"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.589898 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"] Jan 31 15:15:04 crc kubenswrapper[4763]: E0131 15:15:04.590225 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b6cd4d-3b2e-456c-afec-739df2a5e910" containerName="collect-profiles" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.590239 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b6cd4d-3b2e-456c-afec-739df2a5e910" containerName="collect-profiles" Jan 31 15:15:04 crc kubenswrapper[4763]: E0131 15:15:04.590254 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa1aa7a-aa74-4771-a74a-51c73cd37867" containerName="swift-ring-rebalance" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.590262 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa1aa7a-aa74-4771-a74a-51c73cd37867" containerName="swift-ring-rebalance" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.590402 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7b6cd4d-3b2e-456c-afec-739df2a5e910" containerName="collect-profiles" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.590422 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa1aa7a-aa74-4771-a74a-51c73cd37867" containerName="swift-ring-rebalance" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.590953 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.608330 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"] Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.618372 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9fa1aa7a-aa74-4771-a74a-51c73cd37867-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.618414 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.618436 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvg4b\" (UniqueName: \"kubernetes.io/projected/9fa1aa7a-aa74-4771-a74a-51c73cd37867-kube-api-access-dvg4b\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.618457 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.618478 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.618496 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.719645 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-swiftconf\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.719763 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-dispersionconf\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.719836 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-scripts\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.719872 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4953b18b-b3bb-490c-8992-eef7307fdd9d-etc-swift\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 
15:15:04.720018 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqxgl\" (UniqueName: \"kubernetes.io/projected/4953b18b-b3bb-490c-8992-eef7307fdd9d-kube-api-access-nqxgl\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.720041 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-ring-data-devices\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.822172 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-dispersionconf\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.822346 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-scripts\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.822412 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4953b18b-b3bb-490c-8992-eef7307fdd9d-etc-swift\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.822538 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqxgl\" (UniqueName: \"kubernetes.io/projected/4953b18b-b3bb-490c-8992-eef7307fdd9d-kube-api-access-nqxgl\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.822603 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-ring-data-devices\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.822672 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-swiftconf\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.822935 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4953b18b-b3bb-490c-8992-eef7307fdd9d-etc-swift\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: 
\"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.823652 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-scripts\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.823929 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-ring-data-devices\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.830648 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-dispersionconf\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.830800 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-swiftconf\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.838566 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqxgl\" (UniqueName: \"kubernetes.io/projected/4953b18b-b3bb-490c-8992-eef7307fdd9d-kube-api-access-nqxgl\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.924567 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.971245 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="873824f34424ae78088f76f80ab4ed2704a83af62bd0bddc461bf62e64ee83f7" Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.971424 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" Jan 31 15:15:05 crc kubenswrapper[4763]: I0131 15:15:05.062554 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa1aa7a-aa74-4771-a74a-51c73cd37867" path="/var/lib/kubelet/pods/9fa1aa7a-aa74-4771-a74a-51c73cd37867/volumes" Jan 31 15:15:05 crc kubenswrapper[4763]: I0131 15:15:05.161156 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"] Jan 31 15:15:05 crc kubenswrapper[4763]: W0131 15:15:05.165989 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4953b18b_b3bb_490c_8992_eef7307fdd9d.slice/crio-27ad12e9d73808eeb265a782edb8eb6db4ef1efbb117fc542c3862e2d28c7f3d WatchSource:0}: Error finding container 27ad12e9d73808eeb265a782edb8eb6db4ef1efbb117fc542c3862e2d28c7f3d: Status 404 returned error can't find the container with id 27ad12e9d73808eeb265a782edb8eb6db4ef1efbb117fc542c3862e2d28c7f3d Jan 31 15:15:05 crc kubenswrapper[4763]: I0131 15:15:05.982197 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" event={"ID":"4953b18b-b3bb-490c-8992-eef7307fdd9d","Type":"ContainerStarted","Data":"8180dd910ac6efc8ad5f3d6dcb80cc45662cc7fb7c88809c730b03aa35ba8bc3"} Jan 31 15:15:05 crc kubenswrapper[4763]: I0131 15:15:05.982488 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" event={"ID":"4953b18b-b3bb-490c-8992-eef7307fdd9d","Type":"ContainerStarted","Data":"27ad12e9d73808eeb265a782edb8eb6db4ef1efbb117fc542c3862e2d28c7f3d"} Jan 31 15:15:06 crc kubenswrapper[4763]: I0131 15:15:06.995217 4763 generic.go:334] "Generic (PLEG): container finished" podID="4953b18b-b3bb-490c-8992-eef7307fdd9d" containerID="8180dd910ac6efc8ad5f3d6dcb80cc45662cc7fb7c88809c730b03aa35ba8bc3" exitCode=0 Jan 31 15:15:06 crc kubenswrapper[4763]: I0131 15:15:06.995330 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" event={"ID":"4953b18b-b3bb-490c-8992-eef7307fdd9d","Type":"ContainerDied","Data":"8180dd910ac6efc8ad5f3d6dcb80cc45662cc7fb7c88809c730b03aa35ba8bc3"} Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.293680 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.321215 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"] Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.326097 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"] Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.475733 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-dispersionconf\") pod \"4953b18b-b3bb-490c-8992-eef7307fdd9d\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.476042 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4953b18b-b3bb-490c-8992-eef7307fdd9d-etc-swift\") pod \"4953b18b-b3bb-490c-8992-eef7307fdd9d\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.476072 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-scripts\") pod \"4953b18b-b3bb-490c-8992-eef7307fdd9d\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.476159 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-ring-data-devices\") pod \"4953b18b-b3bb-490c-8992-eef7307fdd9d\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.476179 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqxgl\" (UniqueName: \"kubernetes.io/projected/4953b18b-b3bb-490c-8992-eef7307fdd9d-kube-api-access-nqxgl\") pod \"4953b18b-b3bb-490c-8992-eef7307fdd9d\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.476248 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-swiftconf\") pod \"4953b18b-b3bb-490c-8992-eef7307fdd9d\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.477934 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4953b18b-b3bb-490c-8992-eef7307fdd9d" (UID: "4953b18b-b3bb-490c-8992-eef7307fdd9d"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.478277 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4953b18b-b3bb-490c-8992-eef7307fdd9d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4953b18b-b3bb-490c-8992-eef7307fdd9d" (UID: "4953b18b-b3bb-490c-8992-eef7307fdd9d"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.484828 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4953b18b-b3bb-490c-8992-eef7307fdd9d-kube-api-access-nqxgl" (OuterVolumeSpecName: "kube-api-access-nqxgl") pod "4953b18b-b3bb-490c-8992-eef7307fdd9d" (UID: "4953b18b-b3bb-490c-8992-eef7307fdd9d"). InnerVolumeSpecName "kube-api-access-nqxgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.496103 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4953b18b-b3bb-490c-8992-eef7307fdd9d" (UID: "4953b18b-b3bb-490c-8992-eef7307fdd9d"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.498142 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4953b18b-b3bb-490c-8992-eef7307fdd9d" (UID: "4953b18b-b3bb-490c-8992-eef7307fdd9d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.500050 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-scripts" (OuterVolumeSpecName: "scripts") pod "4953b18b-b3bb-490c-8992-eef7307fdd9d" (UID: "4953b18b-b3bb-490c-8992-eef7307fdd9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.577919 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.578207 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.578284 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4953b18b-b3bb-490c-8992-eef7307fdd9d-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.578342 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.578397 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.578459 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqxgl\" (UniqueName: \"kubernetes.io/projected/4953b18b-b3bb-490c-8992-eef7307fdd9d-kube-api-access-nqxgl\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:09 crc kubenswrapper[4763]: I0131 15:15:09.019729 4763 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="27ad12e9d73808eeb265a782edb8eb6db4ef1efbb117fc542c3862e2d28c7f3d" Jan 31 15:15:09 crc kubenswrapper[4763]: I0131 15:15:09.019835 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:09 crc kubenswrapper[4763]: I0131 15:15:09.053963 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4953b18b-b3bb-490c-8992-eef7307fdd9d" path="/var/lib/kubelet/pods/4953b18b-b3bb-490c-8992-eef7307fdd9d/volumes" Jan 31 15:15:10 crc kubenswrapper[4763]: I0131 15:15:10.967586 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r"] Jan 31 15:15:10 crc kubenswrapper[4763]: E0131 15:15:10.968283 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4953b18b-b3bb-490c-8992-eef7307fdd9d" containerName="swift-ring-rebalance" Jan 31 15:15:10 crc kubenswrapper[4763]: I0131 15:15:10.968326 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="4953b18b-b3bb-490c-8992-eef7307fdd9d" containerName="swift-ring-rebalance" Jan 31 15:15:10 crc kubenswrapper[4763]: I0131 15:15:10.968521 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="4953b18b-b3bb-490c-8992-eef7307fdd9d" containerName="swift-ring-rebalance" Jan 31 15:15:10 crc kubenswrapper[4763]: I0131 15:15:10.969269 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:10 crc kubenswrapper[4763]: I0131 15:15:10.972193 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 15:15:10 crc kubenswrapper[4763]: I0131 15:15:10.972749 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 15:15:10 crc kubenswrapper[4763]: I0131 15:15:10.983858 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r"] Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.118052 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smxfc\" (UniqueName: \"kubernetes.io/projected/9e462269-9089-4bbd-a58c-1b0667972de9-kube-api-access-smxfc\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.118100 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9e462269-9089-4bbd-a58c-1b0667972de9-etc-swift\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.118123 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-swiftconf\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.118167 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-ring-data-devices\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.118194 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-scripts\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.118493 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-dispersionconf\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.219881 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-ring-data-devices\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.219992 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-scripts\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.220148 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-dispersionconf\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.220267 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smxfc\" (UniqueName: \"kubernetes.io/projected/9e462269-9089-4bbd-a58c-1b0667972de9-kube-api-access-smxfc\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.220321 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9e462269-9089-4bbd-a58c-1b0667972de9-etc-swift\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.220409 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-swiftconf\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.221541 4763 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9e462269-9089-4bbd-a58c-1b0667972de9-etc-swift\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.221770 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-ring-data-devices\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.222245 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-scripts\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.228121 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-swiftconf\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.229042 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-dispersionconf\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.253749 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smxfc\" (UniqueName: \"kubernetes.io/projected/9e462269-9089-4bbd-a58c-1b0667972de9-kube-api-access-smxfc\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.301417 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.813023 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r"] Jan 31 15:15:11 crc kubenswrapper[4763]: W0131 15:15:11.820675 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e462269_9089_4bbd_a58c_1b0667972de9.slice/crio-e69934d7b524a70aa6f64ea6649babfe469e02cb0e6edef1ab0d95dadff6436a WatchSource:0}: Error finding container e69934d7b524a70aa6f64ea6649babfe469e02cb0e6edef1ab0d95dadff6436a: Status 404 returned error can't find the container with id e69934d7b524a70aa6f64ea6649babfe469e02cb0e6edef1ab0d95dadff6436a Jan 31 15:15:12 crc kubenswrapper[4763]: I0131 15:15:12.050832 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" event={"ID":"9e462269-9089-4bbd-a58c-1b0667972de9","Type":"ContainerStarted","Data":"e69934d7b524a70aa6f64ea6649babfe469e02cb0e6edef1ab0d95dadff6436a"} Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.071925 4763 generic.go:334] "Generic (PLEG): container finished" podID="9e462269-9089-4bbd-a58c-1b0667972de9" containerID="df8f491bf5aacbab5a8630f25038d21050a63a1c4fbccf55da4b9be6e45da402" exitCode=0 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.071999 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" event={"ID":"9e462269-9089-4bbd-a58c-1b0667972de9","Type":"ContainerDied","Data":"df8f491bf5aacbab5a8630f25038d21050a63a1c4fbccf55da4b9be6e45da402"} Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.124019 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r"] Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.132291 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r"] Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.237300 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.237783 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-replicator" containerID="cri-o://16d25b731fa680bf545bedd9ee8124c58230c5826aad06eaed21d44ebcb95b3f" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.237830 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-server" containerID="cri-o://eb52d0ca91a944d5c8ad2fa653c9fa6ee50c07649415a73ade8c5c4153e97392" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.237778 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-server" containerID="cri-o://9deafbdec521654222135fae4a75189b7d212a38e26a9e35528481bb48dfe25d" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.237912 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-updater" 
containerID="cri-o://ddb0e5002dc17cb138568e744262f32bbfd952fee6858962182e6dd4037f223b" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.237942 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-auditor" containerID="cri-o://38e47bd1962160574137a5d84c3fcba7de4d7e7d941eaa6fc85fc7d17c701c2b" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.237968 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-replicator" containerID="cri-o://7d80e2b71f4cf2dfe389248e41642348209b0bf63a38f66e95b3a2c6e7260234" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238006 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-server" containerID="cri-o://0ed0e66180b91482a31099e92316c33c11bb10524bf629042109c378e3080682" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238048 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-reaper" containerID="cri-o://485cbd77da69f3c2a508872f5ca6e889f79adf0023dc9705725e5bb72e9b68f0" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238100 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-auditor" containerID="cri-o://42ec21618efaa5ddb06f0dc914d466c534df09064f84bd8dfd7d4e397b2993dc" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238138 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-replicator" containerID="cri-o://ad6aed25c6e7195d02585e59f08a712946e5140839e590d00e1fd3b656f906e1" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238227 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-sharder" containerID="cri-o://9c5b86851732a69e6e48096dd6f1b888f12fe142cae2d0bd8bef364d2e611b8e" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238258 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="swift-recon-cron" containerID="cri-o://5af94547d6eae1fcceeb9f2af0d269373db899e6b1f77d785fa71bb533c4ff1e" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238247 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-expirer" containerID="cri-o://66b1c2a9de2f47ccee33ed4084db9a99570169d12537c16e6734017f17b75f44" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238309 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" 
podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-auditor" containerID="cri-o://f11355f4c90e474a9eb7e8bffad1d97535b6d05264b819d2663c38b16a03b8b0" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238292 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="rsync" containerID="cri-o://370769a37b25c50b9d2a6455c870de2ac9df87a2e7618e148b0d0f1c2ea87ea4" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238375 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-updater" containerID="cri-o://6eded90931167b47c7a89da68469090108a9e35bce81085ea08e8704905bbdf6" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.265681 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-6fg48"] Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.282401 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-6fg48"] Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.294040 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"] Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.294286 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" podUID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerName="proxy-httpd" containerID="cri-o://c75bfd6add657441b6a71901eb789ee4a6824e5f58e98d0ae1677f3d7f9ec850" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.294594 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" podUID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerName="proxy-server" containerID="cri-o://7408e446c82f19d8bca458d55cad69dd2743a440768ce18ecfc4933fcd0795d6" gracePeriod=30 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.081619 4763 generic.go:334] "Generic (PLEG): container finished" podID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerID="7408e446c82f19d8bca458d55cad69dd2743a440768ce18ecfc4933fcd0795d6" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.082811 4763 generic.go:334] "Generic (PLEG): container finished" podID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerID="c75bfd6add657441b6a71901eb789ee4a6824e5f58e98d0ae1677f3d7f9ec850" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.081689 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" event={"ID":"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0","Type":"ContainerDied","Data":"7408e446c82f19d8bca458d55cad69dd2743a440768ce18ecfc4933fcd0795d6"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.082947 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" event={"ID":"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0","Type":"ContainerDied","Data":"c75bfd6add657441b6a71901eb789ee4a6824e5f58e98d0ae1677f3d7f9ec850"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.091856 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="9c5b86851732a69e6e48096dd6f1b888f12fe142cae2d0bd8bef364d2e611b8e" exitCode=0 Jan 31 15:15:14 
crc kubenswrapper[4763]: I0131 15:15:14.091897 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="370769a37b25c50b9d2a6455c870de2ac9df87a2e7618e148b0d0f1c2ea87ea4" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.091913 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="66b1c2a9de2f47ccee33ed4084db9a99570169d12537c16e6734017f17b75f44" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.091929 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="6eded90931167b47c7a89da68469090108a9e35bce81085ea08e8704905bbdf6" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.091957 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="f11355f4c90e474a9eb7e8bffad1d97535b6d05264b819d2663c38b16a03b8b0" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.091974 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="16d25b731fa680bf545bedd9ee8124c58230c5826aad06eaed21d44ebcb95b3f" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.091987 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="eb52d0ca91a944d5c8ad2fa653c9fa6ee50c07649415a73ade8c5c4153e97392" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092000 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="ddb0e5002dc17cb138568e744262f32bbfd952fee6858962182e6dd4037f223b" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092016 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="38e47bd1962160574137a5d84c3fcba7de4d7e7d941eaa6fc85fc7d17c701c2b" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.091919 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"9c5b86851732a69e6e48096dd6f1b888f12fe142cae2d0bd8bef364d2e611b8e"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092088 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"370769a37b25c50b9d2a6455c870de2ac9df87a2e7618e148b0d0f1c2ea87ea4"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092105 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"66b1c2a9de2f47ccee33ed4084db9a99570169d12537c16e6734017f17b75f44"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092120 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"6eded90931167b47c7a89da68469090108a9e35bce81085ea08e8704905bbdf6"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092028 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="7d80e2b71f4cf2dfe389248e41642348209b0bf63a38f66e95b3a2c6e7260234" exitCode=0 Jan 31 15:15:14 crc 
kubenswrapper[4763]: I0131 15:15:14.092176 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="0ed0e66180b91482a31099e92316c33c11bb10524bf629042109c378e3080682" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092205 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="485cbd77da69f3c2a508872f5ca6e889f79adf0023dc9705725e5bb72e9b68f0" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092136 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"f11355f4c90e474a9eb7e8bffad1d97535b6d05264b819d2663c38b16a03b8b0"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092268 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"16d25b731fa680bf545bedd9ee8124c58230c5826aad06eaed21d44ebcb95b3f"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092296 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"eb52d0ca91a944d5c8ad2fa653c9fa6ee50c07649415a73ade8c5c4153e97392"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092317 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"ddb0e5002dc17cb138568e744262f32bbfd952fee6858962182e6dd4037f223b"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092221 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="42ec21618efaa5ddb06f0dc914d466c534df09064f84bd8dfd7d4e397b2993dc" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092362 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="ad6aed25c6e7195d02585e59f08a712946e5140839e590d00e1fd3b656f906e1" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092381 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="9deafbdec521654222135fae4a75189b7d212a38e26a9e35528481bb48dfe25d" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092336 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"38e47bd1962160574137a5d84c3fcba7de4d7e7d941eaa6fc85fc7d17c701c2b"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092447 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"7d80e2b71f4cf2dfe389248e41642348209b0bf63a38f66e95b3a2c6e7260234"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092467 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"0ed0e66180b91482a31099e92316c33c11bb10524bf629042109c378e3080682"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092482 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"485cbd77da69f3c2a508872f5ca6e889f79adf0023dc9705725e5bb72e9b68f0"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092494 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"42ec21618efaa5ddb06f0dc914d466c534df09064f84bd8dfd7d4e397b2993dc"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092506 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"ad6aed25c6e7195d02585e59f08a712946e5140839e590d00e1fd3b656f906e1"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092518 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"9deafbdec521654222135fae4a75189b7d212a38e26a9e35528481bb48dfe25d"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.527685 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.683175 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-scripts\") pod \"9e462269-9089-4bbd-a58c-1b0667972de9\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.683536 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smxfc\" (UniqueName: \"kubernetes.io/projected/9e462269-9089-4bbd-a58c-1b0667972de9-kube-api-access-smxfc\") pod \"9e462269-9089-4bbd-a58c-1b0667972de9\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.683584 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-ring-data-devices\") pod \"9e462269-9089-4bbd-a58c-1b0667972de9\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.683627 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-swiftconf\") pod \"9e462269-9089-4bbd-a58c-1b0667972de9\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.683649 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-dispersionconf\") pod \"9e462269-9089-4bbd-a58c-1b0667972de9\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.683678 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9e462269-9089-4bbd-a58c-1b0667972de9-etc-swift\") pod \"9e462269-9089-4bbd-a58c-1b0667972de9\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.684342 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "9e462269-9089-4bbd-a58c-1b0667972de9" (UID: "9e462269-9089-4bbd-a58c-1b0667972de9"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.684382 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e462269-9089-4bbd-a58c-1b0667972de9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9e462269-9089-4bbd-a58c-1b0667972de9" (UID: "9e462269-9089-4bbd-a58c-1b0667972de9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.691057 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e462269-9089-4bbd-a58c-1b0667972de9-kube-api-access-smxfc" (OuterVolumeSpecName: "kube-api-access-smxfc") pod "9e462269-9089-4bbd-a58c-1b0667972de9" (UID: "9e462269-9089-4bbd-a58c-1b0667972de9"). InnerVolumeSpecName "kube-api-access-smxfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.703443 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "9e462269-9089-4bbd-a58c-1b0667972de9" (UID: "9e462269-9089-4bbd-a58c-1b0667972de9"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.703665 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "9e462269-9089-4bbd-a58c-1b0667972de9" (UID: "9e462269-9089-4bbd-a58c-1b0667972de9"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.715875 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-scripts" (OuterVolumeSpecName: "scripts") pod "9e462269-9089-4bbd-a58c-1b0667972de9" (UID: "9e462269-9089-4bbd-a58c-1b0667972de9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.731382 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.785783 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.785815 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smxfc\" (UniqueName: \"kubernetes.io/projected/9e462269-9089-4bbd-a58c-1b0667972de9-kube-api-access-smxfc\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.785826 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.785834 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.785843 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.785850 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9e462269-9089-4bbd-a58c-1b0667972de9-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.886311 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-log-httpd\") pod \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.886387 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-run-httpd\") pod \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.886419 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-etc-swift\") pod \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.886438 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-config-data\") pod \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.886458 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wzbf\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-kube-api-access-7wzbf\") pod \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.886759 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" (UID: "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.887343 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" (UID: "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.890309 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" (UID: "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.891033 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-kube-api-access-7wzbf" (OuterVolumeSpecName: "kube-api-access-7wzbf") pod "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" (UID: "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0"). InnerVolumeSpecName "kube-api-access-7wzbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.916430 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-config-data" (OuterVolumeSpecName: "config-data") pod "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" (UID: "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.988521 4763 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.988555 4763 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.988567 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.988579 4763 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.988591 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wzbf\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-kube-api-access-7wzbf\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.049576 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d85cde-9969-46d1-9e16-44c5747493cc" path="/var/lib/kubelet/pods/52d85cde-9969-46d1-9e16-44c5747493cc/volumes" Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.050354 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e462269-9089-4bbd-a58c-1b0667972de9" path="/var/lib/kubelet/pods/9e462269-9089-4bbd-a58c-1b0667972de9/volumes" Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.102583 4763 scope.go:117] "RemoveContainer" containerID="df8f491bf5aacbab5a8630f25038d21050a63a1c4fbccf55da4b9be6e45da402" Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.102585 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.106006 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" event={"ID":"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0","Type":"ContainerDied","Data":"b119bcbbd6a3db6b88bff647faeec6bfa2e07015bb14d46d3cb45d7d5768e7de"} Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.106084 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.137665 4763 scope.go:117] "RemoveContainer" containerID="7408e446c82f19d8bca458d55cad69dd2743a440768ce18ecfc4933fcd0795d6" Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.137867 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"] Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.144469 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"] Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.155752 4763 scope.go:117] "RemoveContainer" containerID="c75bfd6add657441b6a71901eb789ee4a6824e5f58e98d0ae1677f3d7f9ec850" Jan 31 15:15:17 crc kubenswrapper[4763]: I0131 15:15:17.056211 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" path="/var/lib/kubelet/pods/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0/volumes" Jan 31 15:15:41 crc kubenswrapper[4763]: I0131 15:15:41.683100 4763 scope.go:117] "RemoveContainer" containerID="23e3fe9f7b231da52853b276574f18bf79feb38b42567138be71d5b85f526157" Jan 31 15:15:41 crc kubenswrapper[4763]: I0131 15:15:41.709430 4763 scope.go:117] "RemoveContainer" containerID="0c5a179d917112c47df3d672325ac30e6e4efd61885f9377b2ea3e10d6c629b4" Jan 31 15:15:41 crc kubenswrapper[4763]: I0131 15:15:41.741813 4763 scope.go:117] "RemoveContainer" containerID="58e640168ef1b75e853e394649bc966d1036d4bb11ab8918c809f9ee7dee4196" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.433346 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="5af94547d6eae1fcceeb9f2af0d269373db899e6b1f77d785fa71bb533c4ff1e" exitCode=137 Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.433461 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"5af94547d6eae1fcceeb9f2af0d269373db899e6b1f77d785fa71bb533c4ff1e"} Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.670473 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.758680 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd4v2\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-kube-api-access-pd4v2\") pod \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.758768 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.758852 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-lock\") pod \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.758883 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") pod \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.758998 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-cache\") pod \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.761919 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-lock" (OuterVolumeSpecName: "lock") pod "0f637cde-45f1-4c1e-b345-7f89e17eccc6" (UID: "0f637cde-45f1-4c1e-b345-7f89e17eccc6"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.761964 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-cache" (OuterVolumeSpecName: "cache") pod "0f637cde-45f1-4c1e-b345-7f89e17eccc6" (UID: "0f637cde-45f1-4c1e-b345-7f89e17eccc6"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.767518 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-kube-api-access-pd4v2" (OuterVolumeSpecName: "kube-api-access-pd4v2") pod "0f637cde-45f1-4c1e-b345-7f89e17eccc6" (UID: "0f637cde-45f1-4c1e-b345-7f89e17eccc6"). InnerVolumeSpecName "kube-api-access-pd4v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.767795 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0f637cde-45f1-4c1e-b345-7f89e17eccc6" (UID: "0f637cde-45f1-4c1e-b345-7f89e17eccc6"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.767898 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "swift") pod "0f637cde-45f1-4c1e-b345-7f89e17eccc6" (UID: "0f637cde-45f1-4c1e-b345-7f89e17eccc6"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.860990 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd4v2\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-kube-api-access-pd4v2\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.861041 4763 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.861051 4763 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.861060 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.861069 4763 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-cache\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.872636 4763 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.962606 4763 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.177785 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.177845 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.445778 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"6e2e9e90507c6c3f151001d225cf3ffd2d74fbb47248c3805eb15d884a5abba9"} Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.445839 4763 scope.go:117] "RemoveContainer" containerID="9c5b86851732a69e6e48096dd6f1b888f12fe142cae2d0bd8bef364d2e611b8e" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.445862 4763 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.462094 4763 scope.go:117] "RemoveContainer" containerID="5af94547d6eae1fcceeb9f2af0d269373db899e6b1f77d785fa71bb533c4ff1e" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.483490 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.484346 4763 scope.go:117] "RemoveContainer" containerID="370769a37b25c50b9d2a6455c870de2ac9df87a2e7618e148b0d0f1c2ea87ea4" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.491418 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.505539 4763 scope.go:117] "RemoveContainer" containerID="66b1c2a9de2f47ccee33ed4084db9a99570169d12537c16e6734017f17b75f44" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.530883 4763 scope.go:117] "RemoveContainer" containerID="6eded90931167b47c7a89da68469090108a9e35bce81085ea08e8704905bbdf6" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.553643 4763 scope.go:117] "RemoveContainer" containerID="f11355f4c90e474a9eb7e8bffad1d97535b6d05264b819d2663c38b16a03b8b0" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.574100 4763 scope.go:117] "RemoveContainer" containerID="16d25b731fa680bf545bedd9ee8124c58230c5826aad06eaed21d44ebcb95b3f" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.593194 4763 scope.go:117] "RemoveContainer" containerID="eb52d0ca91a944d5c8ad2fa653c9fa6ee50c07649415a73ade8c5c4153e97392" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.613199 4763 scope.go:117] "RemoveContainer" containerID="ddb0e5002dc17cb138568e744262f32bbfd952fee6858962182e6dd4037f223b" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.629583 4763 scope.go:117] "RemoveContainer" containerID="38e47bd1962160574137a5d84c3fcba7de4d7e7d941eaa6fc85fc7d17c701c2b" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.647382 4763 scope.go:117] "RemoveContainer" containerID="7d80e2b71f4cf2dfe389248e41642348209b0bf63a38f66e95b3a2c6e7260234" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.665292 4763 scope.go:117] "RemoveContainer" containerID="0ed0e66180b91482a31099e92316c33c11bb10524bf629042109c378e3080682" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.684112 4763 scope.go:117] "RemoveContainer" containerID="485cbd77da69f3c2a508872f5ca6e889f79adf0023dc9705725e5bb72e9b68f0" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.702564 4763 scope.go:117] "RemoveContainer" containerID="42ec21618efaa5ddb06f0dc914d466c534df09064f84bd8dfd7d4e397b2993dc" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.716850 4763 scope.go:117] "RemoveContainer" containerID="ad6aed25c6e7195d02585e59f08a712946e5140839e590d00e1fd3b656f906e1" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.729904 4763 scope.go:117] "RemoveContainer" containerID="9deafbdec521654222135fae4a75189b7d212a38e26a9e35528481bb48dfe25d" Jan 31 15:15:45 crc kubenswrapper[4763]: I0131 15:15:45.051832 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" path="/var/lib/kubelet/pods/0f637cde-45f1-4c1e-b345-7f89e17eccc6/volumes" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627513 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:15:46 crc kubenswrapper[4763]: 
E0131 15:15:46.627817 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-server" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627832 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-server" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.627845 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-expirer" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627852 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-expirer" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.627865 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-replicator" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627875 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-replicator" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.627888 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-auditor" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627895 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-auditor" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.627906 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-server" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627913 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-server" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.627925 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-auditor" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627933 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-auditor" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.627943 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerName="proxy-httpd" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627950 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerName="proxy-httpd" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.627961 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-auditor" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627969 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-auditor" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.627981 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-sharder" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627988 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-sharder" Jan 31 15:15:46 crc 
kubenswrapper[4763]: E0131 15:15:46.628002 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-reaper" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628010 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-reaper" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.628020 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-replicator" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628028 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-replicator" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.628039 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerName="proxy-server" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628046 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerName="proxy-server" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.628057 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-replicator" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628066 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-replicator" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.628084 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="swift-recon-cron" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628092 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="swift-recon-cron" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.628104 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-server" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628112 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-server" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.628132 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="rsync" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628141 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="rsync" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.628149 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e462269-9089-4bbd-a58c-1b0667972de9" containerName="swift-ring-rebalance" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628158 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e462269-9089-4bbd-a58c-1b0667972de9" containerName="swift-ring-rebalance" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.628171 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-updater" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628178 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-updater" 
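This RemoveStaleState sweep (it continues through the memory_manager entries below) is the side effect of swift-storage-0 being re-created under a fresh UID: on the SyncLoop ADD, the CPU and memory managers purge the per-container state they still hold for the old pod UIDs (0f637cde-…, 193f8f06-…, 9e462269-…). Note that the cpu_manager lines are emitted at error severity (E0131) even though the cleanup itself is routine. The pattern is a keyed-map sweep, sketched here with illustrative types:

    package main

    import "fmt"

    type key struct{ podUID, container string }

    // stateMem is an illustrative stand-in for the checkpointed state
    // the CPU manager keeps per container.
    type stateMem struct{ cpuSets map[key]string }

    func (s *stateMem) delete(k key) {
        delete(s.cpuSets, k)
        fmt.Printf("Deleted CPUSet assignment podUID=%q containerName=%q\n", k.podUID, k.container)
    }

    // removeStaleState sweeps out every assignment whose pod UID is no
    // longer active on the node, e.g. after a StatefulSet pod comes
    // back under a fresh UID.
    func removeStaleState(s *stateMem, active map[string]bool) {
        for k := range s.cpuSets {
            if !active[k.podUID] {
                s.delete(k)
            }
        }
    }

    func main() {
        s := &stateMem{cpuSets: map[key]string{
            {podUID: "0f637cde-45f1-4c1e-b345-7f89e17eccc6", container: "object-server"}: "0-3",
        }}
        removeStaleState(s, map[string]bool{}) // old UID no longer active
    }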
Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.628190 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-updater" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628198 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-updater" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628357 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-replicator" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628370 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-updater" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628383 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-server" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628394 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e462269-9089-4bbd-a58c-1b0667972de9" containerName="swift-ring-rebalance" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628404 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-auditor" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628416 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-auditor" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628424 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-replicator" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628433 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-auditor" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628441 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-server" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628451 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="rsync" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628463 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerName="proxy-httpd" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628474 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-sharder" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628483 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="swift-recon-cron" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628493 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerName="proxy-server" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628504 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-reaper" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628516 4763 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-server" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628527 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-replicator" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628537 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-expirer" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628548 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-updater" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.633001 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.634751 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.634955 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.635466 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-29kdk" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.636109 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.654103 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.659202 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.671976 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.689749 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.697100 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.700803 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.720788 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801365 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801412 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9hz8\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-kube-api-access-n9hz8\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801429 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-cache\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801455 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801479 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801493 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-lock\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801514 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801533 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-cache\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801548 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-8r45b\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-kube-api-access-8r45b\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801565 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp55l\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-kube-api-access-sp55l\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801581 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-cache\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801599 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801821 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801875 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-lock\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801938 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-lock\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903546 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903599 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903620 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-lock\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " 
pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903644 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903668 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-cache\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903686 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r45b\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-kube-api-access-8r45b\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903719 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp55l\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-kube-api-access-sp55l\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903736 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-cache\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903753 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.903766 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.903798 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.903828 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.903848 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.903860 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift podName:8d1baecf-2b7e-418b-8c64-95b6551f365e nodeName:}" failed. No retries permitted until 2026-01-31 15:15:47.403833198 +0000 UTC m=+1267.158571581 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift") pod "swift-storage-1" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e") : configmap "swift-ring-files" not found Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903776 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.903900 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift podName:e6ace0ac-a7c8-4413-90ee-53d6bf699eef nodeName:}" failed. No retries permitted until 2026-01-31 15:15:47.40387838 +0000 UTC m=+1267.158616673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift") pod "swift-storage-2" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef") : configmap "swift-ring-files" not found Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903940 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") device mount path \"/mnt/openstack/pv03\"" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903934 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") device mount path \"/mnt/openstack/pv09\"" pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903965 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-lock\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904054 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-lock\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904098 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904135 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9hz8\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-kube-api-access-n9hz8\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904159 4763 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-cache\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904292 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-lock\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904391 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") device mount path \"/mnt/openstack/pv11\"" pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904657 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-cache\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.905128 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-lock\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.904747 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904794 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-lock\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904849 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-cache\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.905166 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904831 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-cache\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.905252 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift podName:0a5bfb32-7eae-4b04-9aee-d0873f0c93b9 nodeName:}" failed. No retries permitted until 2026-01-31 15:15:47.405230935 +0000 UTC m=+1267.159969248 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift") pod "swift-storage-0" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9") : configmap "swift-ring-files" not found Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.924485 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.924556 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.926489 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r45b\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-kube-api-access-8r45b\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.927478 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9hz8\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-kube-api-access-n9hz8\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.929431 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp55l\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-kube-api-access-sp55l\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.935878 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.170049 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj"] Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.171447 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.174110 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.180465 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj"] Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.309662 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.309782 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-log-httpd\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.309846 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f758c\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-kube-api-access-f758c\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.309918 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76b9c98-a93e-4935-947c-9ecf237b7a97-config-data\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.309942 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-run-httpd\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.411320 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.411454 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.411488 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-log-httpd\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: 
\"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.411546 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f758c\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-kube-api-access-f758c\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.411606 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.411658 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.411749 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76b9c98-a93e-4935-947c-9ecf237b7a97-config-data\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.411804 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-run-httpd\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.411492 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.411848 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.411888 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift podName:a76b9c98-a93e-4935-947c-9ecf237b7a97 nodeName:}" failed. No retries permitted until 2026-01-31 15:15:47.911871873 +0000 UTC m=+1267.666610156 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift") pod "swift-proxy-7d8cf99555-brqpj" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97") : configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.411550 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.412045 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.412074 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift podName:0a5bfb32-7eae-4b04-9aee-d0873f0c93b9 nodeName:}" failed. No retries permitted until 2026-01-31 15:15:48.412064328 +0000 UTC m=+1268.166802711 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift") pod "swift-storage-0" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9") : configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.411832 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.412091 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.412114 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift podName:8d1baecf-2b7e-418b-8c64-95b6551f365e nodeName:}" failed. No retries permitted until 2026-01-31 15:15:48.412106519 +0000 UTC m=+1268.166844932 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift") pod "swift-storage-1" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e") : configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.412178 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.412187 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.412209 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift podName:e6ace0ac-a7c8-4413-90ee-53d6bf699eef nodeName:}" failed. No retries permitted until 2026-01-31 15:15:48.412200751 +0000 UTC m=+1268.166939144 (durationBeforeRetry 1s). 
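The kubelet can only keep retrying; the fix has to come from the ConfigMap reappearing, and the swift-ring-rebalance-5mn9x pod ADDed at 15:15:50 below is presumably what republishes the ring files. A small client-go probe to check for the ConfigMap directly; this is a diagnostic sketch of mine, not anything kubelet itself runs:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Namespace and name are taken from the log entries above.
        _, err = cs.CoreV1().ConfigMaps("swift-kuttl-tests").
            Get(context.Background(), "swift-ring-files", metav1.GetOptions{})
        fmt.Println("swift-ring-files lookup:", err) // nil once the ring files exist again
    }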
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift") pod "swift-storage-2" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef") : configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.412526 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-log-httpd\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.412600 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-run-httpd\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.416146 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76b9c98-a93e-4935-947c-9ecf237b7a97-config-data\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.433928 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f758c\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-kube-api-access-f758c\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.917788 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.918067 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.918110 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.918210 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift podName:a76b9c98-a93e-4935-947c-9ecf237b7a97 nodeName:}" failed. No retries permitted until 2026-01-31 15:15:48.918182472 +0000 UTC m=+1268.672920805 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift") pod "swift-proxy-7d8cf99555-brqpj" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97") : configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: I0131 15:15:48.426322 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.426588 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.426619 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: I0131 15:15:48.426627 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:48 crc kubenswrapper[4763]: I0131 15:15:48.426661 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.426830 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.426844 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.426830 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift podName:0a5bfb32-7eae-4b04-9aee-d0873f0c93b9 nodeName:}" failed. No retries permitted until 2026-01-31 15:15:50.426667528 +0000 UTC m=+1270.181405861 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift") pod "swift-storage-0" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9") : configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.426887 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift podName:e6ace0ac-a7c8-4413-90ee-53d6bf699eef nodeName:}" failed. No retries permitted until 2026-01-31 15:15:50.426875293 +0000 UTC m=+1270.181613596 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift") pod "swift-storage-2" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef") : configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.426946 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.426987 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.427063 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift podName:8d1baecf-2b7e-418b-8c64-95b6551f365e nodeName:}" failed. No retries permitted until 2026-01-31 15:15:50.427036497 +0000 UTC m=+1270.181774790 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift") pod "swift-storage-1" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e") : configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: I0131 15:15:48.933823 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.934118 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.934624 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj: configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.934771 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift podName:a76b9c98-a93e-4935-947c-9ecf237b7a97 nodeName:}" failed. No retries permitted until 2026-01-31 15:15:50.934731613 +0000 UTC m=+1270.689469946 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift") pod "swift-proxy-7d8cf99555-brqpj" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97") : configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.403637 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-5mn9x"] Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.405985 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.409136 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.410202 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.424330 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-5mn9x"] Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459108 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-dispersionconf\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459180 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-ring-data-devices\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459219 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459261 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459301 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-swiftconf\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459325 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r9bn\" (UniqueName: \"kubernetes.io/projected/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-kube-api-access-6r9bn\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459386 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-etc-swift\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459448 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459470 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-scripts\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459618 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-cw985"] Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.460873 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.459660 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.461129 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.459734 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.461209 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.461251 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift podName:e6ace0ac-a7c8-4413-90ee-53d6bf699eef nodeName:}" failed. No retries permitted until 2026-01-31 15:15:54.461232918 +0000 UTC m=+1274.215971211 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift") pod "swift-storage-2" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef") : configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.459802 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.461323 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.461435 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift podName:0a5bfb32-7eae-4b04-9aee-d0873f0c93b9 nodeName:}" failed. No retries permitted until 2026-01-31 15:15:54.461399252 +0000 UTC m=+1274.216137585 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift") pod "swift-storage-0" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9") : configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.461633 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift podName:8d1baecf-2b7e-418b-8c64-95b6551f365e nodeName:}" failed. No retries permitted until 2026-01-31 15:15:54.461616807 +0000 UTC m=+1274.216355210 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift") pod "swift-storage-1" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e") : configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.483622 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-5mn9x"] Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.484335 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[dispersionconf etc-swift kube-api-access-6r9bn ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" podUID="11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.491174 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-cw985"] Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.496186 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.503423 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.561261 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-dispersionconf\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.561482 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-scripts\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.561556 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-scripts\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.561617 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jxfn\" (UniqueName: \"kubernetes.io/projected/5a85fc7e-226b-498d-9156-c4a5ecf075b9-kube-api-access-7jxfn\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.561676 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-dispersionconf\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.561778 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-ring-data-devices\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.561877 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-swiftconf\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.562231 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-scripts\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.562424 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-ring-data-devices\") pod \"swift-ring-rebalance-5mn9x\" (UID: 
\"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.562653 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r9bn\" (UniqueName: \"kubernetes.io/projected/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-kube-api-access-6r9bn\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.562771 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-swiftconf\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.562852 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-ring-data-devices\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.562922 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-etc-swift\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.562982 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a85fc7e-226b-498d-9156-c4a5ecf075b9-etc-swift\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.563346 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-etc-swift\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.568034 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-dispersionconf\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.568892 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-swiftconf\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.587276 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r9bn\" (UniqueName: \"kubernetes.io/projected/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-kube-api-access-6r9bn\") pod \"swift-ring-rebalance-5mn9x\" (UID: 
\"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.664355 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-swiftconf\") pod \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.664411 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-scripts\") pod \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.664446 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r9bn\" (UniqueName: \"kubernetes.io/projected/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-kube-api-access-6r9bn\") pod \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.664465 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-dispersionconf\") pod \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.664661 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-ring-data-devices\") pod \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.664713 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-etc-swift\") pod \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.664890 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a85fc7e-226b-498d-9156-c4a5ecf075b9-etc-swift\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.664960 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-dispersionconf\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.665015 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-scripts\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.665045 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jxfn\" (UniqueName: 
\"kubernetes.io/projected/5a85fc7e-226b-498d-9156-c4a5ecf075b9-kube-api-access-7jxfn\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.665139 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-swiftconf\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.665013 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-scripts" (OuterVolumeSpecName: "scripts") pod "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51" (UID: "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.665493 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51" (UID: "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.665730 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-ring-data-devices\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.665796 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.665813 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.665794 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a85fc7e-226b-498d-9156-c4a5ecf075b9-etc-swift\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.666050 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-scripts\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.666535 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-ring-data-devices\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 
15:15:50.666946 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51" (UID: "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.669483 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51" (UID: "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.669644 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-kube-api-access-6r9bn" (OuterVolumeSpecName: "kube-api-access-6r9bn") pod "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51" (UID: "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51"). InnerVolumeSpecName "kube-api-access-6r9bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.669963 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51" (UID: "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.670178 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-dispersionconf\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.670205 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-swiftconf\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.686750 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jxfn\" (UniqueName: \"kubernetes.io/projected/5a85fc7e-226b-498d-9156-c4a5ecf075b9-kube-api-access-7jxfn\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.767361 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.767581 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.767714 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r9bn\" 
(UniqueName: \"kubernetes.io/projected/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-kube-api-access-6r9bn\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.767804 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.777904 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.970345 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.970606 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.970753 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj: configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.970838 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift podName:a76b9c98-a93e-4935-947c-9ecf237b7a97 nodeName:}" failed. No retries permitted until 2026-01-31 15:15:54.970812592 +0000 UTC m=+1274.725550985 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift") pod "swift-proxy-7d8cf99555-brqpj" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97") : configmap "swift-ring-files" not found Jan 31 15:15:51 crc kubenswrapper[4763]: I0131 15:15:51.190839 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-cw985"] Jan 31 15:15:51 crc kubenswrapper[4763]: I0131 15:15:51.505241 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:51 crc kubenswrapper[4763]: I0131 15:15:51.505247 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-cw985" event={"ID":"5a85fc7e-226b-498d-9156-c4a5ecf075b9","Type":"ContainerStarted","Data":"0df182416b7bb3077be071e835d5e5e14d5a8b304cf30505e3ab3257400dd215"} Jan 31 15:15:51 crc kubenswrapper[4763]: I0131 15:15:51.505857 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-cw985" event={"ID":"5a85fc7e-226b-498d-9156-c4a5ecf075b9","Type":"ContainerStarted","Data":"e919616136f82fa408c8b1e00ee331214a152822ef67ade82653a25aa1eb565a"} Jan 31 15:15:51 crc kubenswrapper[4763]: I0131 15:15:51.525115 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-cw985" podStartSLOduration=1.525084262 podStartE2EDuration="1.525084262s" podCreationTimestamp="2026-01-31 15:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:15:51.523251023 +0000 UTC m=+1271.277989356" watchObservedRunningTime="2026-01-31 15:15:51.525084262 +0000 UTC m=+1271.279822595" Jan 31 15:15:51 crc kubenswrapper[4763]: I0131 15:15:51.563958 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-5mn9x"] Jan 31 15:15:51 crc kubenswrapper[4763]: I0131 15:15:51.563999 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-5mn9x"] Jan 31 15:15:53 crc kubenswrapper[4763]: I0131 15:15:53.049949 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51" path="/var/lib/kubelet/pods/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51/volumes" Jan 31 15:15:54 crc kubenswrapper[4763]: I0131 15:15:54.552636 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:54 crc kubenswrapper[4763]: E0131 15:15:54.552974 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:54 crc kubenswrapper[4763]: E0131 15:15:54.553028 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:15:54 crc kubenswrapper[4763]: I0131 15:15:54.553050 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:54 crc kubenswrapper[4763]: E0131 15:15:54.553122 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift podName:0a5bfb32-7eae-4b04-9aee-d0873f0c93b9 nodeName:}" failed. No retries permitted until 2026-01-31 15:16:02.553088285 +0000 UTC m=+1282.307826618 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift") pod "swift-storage-0" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9") : configmap "swift-ring-files" not found Jan 31 15:15:54 crc kubenswrapper[4763]: I0131 15:15:54.553180 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:54 crc kubenswrapper[4763]: E0131 15:15:54.553234 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:54 crc kubenswrapper[4763]: E0131 15:15:54.553258 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Jan 31 15:15:54 crc kubenswrapper[4763]: E0131 15:15:54.553324 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift podName:8d1baecf-2b7e-418b-8c64-95b6551f365e nodeName:}" failed. No retries permitted until 2026-01-31 15:16:02.553301271 +0000 UTC m=+1282.308039604 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift") pod "swift-storage-1" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e") : configmap "swift-ring-files" not found Jan 31 15:15:54 crc kubenswrapper[4763]: E0131 15:15:54.553544 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:54 crc kubenswrapper[4763]: E0131 15:15:54.553566 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Jan 31 15:15:54 crc kubenswrapper[4763]: E0131 15:15:54.553612 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift podName:e6ace0ac-a7c8-4413-90ee-53d6bf699eef nodeName:}" failed. No retries permitted until 2026-01-31 15:16:02.553596749 +0000 UTC m=+1282.308335082 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift") pod "swift-storage-2" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef") : configmap "swift-ring-files" not found Jan 31 15:15:55 crc kubenswrapper[4763]: I0131 15:15:55.060623 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:55 crc kubenswrapper[4763]: E0131 15:15:55.060760 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:55 crc kubenswrapper[4763]: E0131 15:15:55.060773 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj: configmap "swift-ring-files" not found Jan 31 15:15:55 crc kubenswrapper[4763]: E0131 15:15:55.060810 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift podName:a76b9c98-a93e-4935-947c-9ecf237b7a97 nodeName:}" failed. No retries permitted until 2026-01-31 15:16:03.06079686 +0000 UTC m=+1282.815535153 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift") pod "swift-proxy-7d8cf99555-brqpj" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97") : configmap "swift-ring-files" not found Jan 31 15:16:01 crc kubenswrapper[4763]: I0131 15:16:01.586894 4763 generic.go:334] "Generic (PLEG): container finished" podID="5a85fc7e-226b-498d-9156-c4a5ecf075b9" containerID="0df182416b7bb3077be071e835d5e5e14d5a8b304cf30505e3ab3257400dd215" exitCode=0 Jan 31 15:16:01 crc kubenswrapper[4763]: I0131 15:16:01.587012 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-cw985" event={"ID":"5a85fc7e-226b-498d-9156-c4a5ecf075b9","Type":"ContainerDied","Data":"0df182416b7bb3077be071e835d5e5e14d5a8b304cf30505e3ab3257400dd215"} Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.587459 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.587533 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.587566 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.594837 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.594972 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.595655 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.627307 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.855392 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.873126 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.974117 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.996336 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-dispersionconf\") pod \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.996423 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jxfn\" (UniqueName: \"kubernetes.io/projected/5a85fc7e-226b-498d-9156-c4a5ecf075b9-kube-api-access-7jxfn\") pod \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.996598 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-swiftconf\") pod \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.996655 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-scripts\") pod \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.996672 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a85fc7e-226b-498d-9156-c4a5ecf075b9-etc-swift\") pod \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.998459 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5a85fc7e-226b-498d-9156-c4a5ecf075b9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5a85fc7e-226b-498d-9156-c4a5ecf075b9" (UID: "5a85fc7e-226b-498d-9156-c4a5ecf075b9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.002863 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a85fc7e-226b-498d-9156-c4a5ecf075b9-kube-api-access-7jxfn" (OuterVolumeSpecName: "kube-api-access-7jxfn") pod "5a85fc7e-226b-498d-9156-c4a5ecf075b9" (UID: "5a85fc7e-226b-498d-9156-c4a5ecf075b9"). InnerVolumeSpecName "kube-api-access-7jxfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.021435 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-scripts" (OuterVolumeSpecName: "scripts") pod "5a85fc7e-226b-498d-9156-c4a5ecf075b9" (UID: "5a85fc7e-226b-498d-9156-c4a5ecf075b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.037703 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5a85fc7e-226b-498d-9156-c4a5ecf075b9" (UID: "5a85fc7e-226b-498d-9156-c4a5ecf075b9"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.039554 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5a85fc7e-226b-498d-9156-c4a5ecf075b9" (UID: "5a85fc7e-226b-498d-9156-c4a5ecf075b9"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.088977 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.097391 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-ring-data-devices\") pod \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.097595 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.097814 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.097831 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jxfn\" (UniqueName: \"kubernetes.io/projected/5a85fc7e-226b-498d-9156-c4a5ecf075b9-kube-api-access-7jxfn\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.097846 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.097862 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.097871 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a85fc7e-226b-498d-9156-c4a5ecf075b9-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.097966 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5a85fc7e-226b-498d-9156-c4a5ecf075b9" (UID: "5a85fc7e-226b-498d-9156-c4a5ecf075b9"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.104065 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.198817 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.288276 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:16:03 crc kubenswrapper[4763]: W0131 15:16:03.305922 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a5bfb32_7eae_4b04_9aee_d0873f0c93b9.slice/crio-17d6e2ea7e01a1661ce12761f2f04f8c25d3d7cfb02f9b9ef0ef2888b0bc4824 WatchSource:0}: Error finding container 17d6e2ea7e01a1661ce12761f2f04f8c25d3d7cfb02f9b9ef0ef2888b0bc4824: Status 404 returned error can't find the container with id 17d6e2ea7e01a1661ce12761f2f04f8c25d3d7cfb02f9b9ef0ef2888b0bc4824 Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.361453 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.394186 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.621260 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb"} Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.621502 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"1d4cf1913d51894066c90a63ebfe91dd9186021fe6b288d04eb4138560d222cd"} Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.640527 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"710120310f88e5c722eacaf49bc091b604c9117b11eb65ce63fd698be02b6699"} Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.640587 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"99ceb2ba2010e4648343e1238ccdc00c944820acf210df397fc1aa1ebd073a62"} Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.640605 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"0c8b0727815b22780ef52e9340937734001c60d592a0efa293171a6a99f631b8"} Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.640617 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"54e9716be29a1a3e98b1c62af30cb48d8d18ed8bc33c1e831e29d90d1bbee6be"} Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.642311 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-cw985" event={"ID":"5a85fc7e-226b-498d-9156-c4a5ecf075b9","Type":"ContainerDied","Data":"e919616136f82fa408c8b1e00ee331214a152822ef67ade82653a25aa1eb565a"} Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.642342 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e919616136f82fa408c8b1e00ee331214a152822ef67ade82653a25aa1eb565a" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.642409 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.655987 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"2a92b2bd4fc8d6dca2fcd21333d8ea9dc1bea550c48e4c4023603133141572ad"} Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.656041 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"17d6e2ea7e01a1661ce12761f2f04f8c25d3d7cfb02f9b9ef0ef2888b0bc4824"} Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.880764 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj"] Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.669999 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" event={"ID":"a76b9c98-a93e-4935-947c-9ecf237b7a97","Type":"ContainerStarted","Data":"feb9eca2f3460795772517c1281dac16f555d0d80bd9ba799ff73f601b5b71c8"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.670240 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.670252 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" event={"ID":"a76b9c98-a93e-4935-947c-9ecf237b7a97","Type":"ContainerStarted","Data":"2bbe980cdd23a298b995253469039eca5a71af206ca6fa11e2f7a2a5c08d968c"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.670261 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" event={"ID":"a76b9c98-a93e-4935-947c-9ecf237b7a97","Type":"ContainerStarted","Data":"d3926134dfc43380c3c616215cd0f1de55c1eb370aa433868d96185fd6990644"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.670275 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.673236 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"46bb08319fc51c2ac2e298b3d88809c97e2206ce13b2762bb97ee19fa37761d9"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.673262 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"d21b593c360d355cf6f976e7a33dd5c9a7af1da589078440cac056c5b3195552"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.673271 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"539252f99bc3e6264ace92eb1a171fa552e64dc1e5ab28a67e1806e3665008b7"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.673279 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"8743be8babd948dd0c5cadcbb327888a64b81d91784dcd88e4edc80760703747"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.673288 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"6436423e21963e2b944d7661270d579b854013f8d01c88fd036a9b4a15a61846"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.687351 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.687403 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.687417 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.687430 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.693374 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" podStartSLOduration=17.693350349 podStartE2EDuration="17.693350349s" podCreationTimestamp="2026-01-31 15:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:04.690296459 +0000 UTC m=+1284.445034762" watchObservedRunningTime="2026-01-31 15:16:04.693350349 +0000 UTC m=+1284.448088642" Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.727802 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"fbf72198234fda54505fa079d75aa79a7acaa34fd24b31ddd293f0aae93e0c93"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.727838 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"9a8de8d27e0778796c6a4baff03a1d1d922e9ae78f97728f4ae4aaf7fa341563"} Jan 31 15:16:04 crc 
kubenswrapper[4763]: I0131 15:16:04.727847 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"86c85c15b961f6795b6b5dc674ffc310871f76d3dbb1eb7225326dc937e30e64"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.727857 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"1351528e398244f79dc02492c56eb7c0b5cf97f5af4ab495fdf34673f2be4e3a"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.727866 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"6bbf9c9a63d33d2c6b6e0178b1ade00637993606e119582492e66b6a24013ab8"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.758078 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.758371 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.758381 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.758390 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.758398 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.771041 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.771103 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.771112 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.771121 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.771130 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"92ad90a8f7a31ed2cf85f0757c71ef34da8f0c359347e86a4bcef54ce85516a6"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.789939 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"5de93fda3e4eeec09c3e0cab0c53f8bd9bc5a576f08e2f6a3e358bb501d0aee7"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.790134 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"4007498dac8fd3b8123d95c0037cc87ebfb6676c6920014087622d824443355a"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.790190 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"811bfef1f45e1c8bfb983ccdb9950299ee77652d3e6b07f11a1f38dcaa006989"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.790262 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"433775c4538892b2b06557027ca728e6b8b86916941810cde9bf0aaa7cec78dd"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.790319 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"18e785ee82660570c7a3c1f8a825fe58e5562959092c137ca7e8ad12f67b2cdf"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.809133 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"36471f31ca52c785e38c6b2b66ccd5857f6ef478d9ab8974c38189fbf0e27a7c"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.809559 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"a193fbbc6d4c1b876140e857534a3e3476054468d680fb5032ef95fc43449eec"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.809581 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"aa4453d984b8623efdeb6c14caeec9684c9789a6bbf3f070fda2ae53d211bc67"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.809599 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"9716359f4da3a71b9390a156a58d83e5742038cd7c70d2aaa6dddd57c4c7402f"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.822653 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.822737 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.822780 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.822793 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.822804 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.831266 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.831330 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.863232 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=21.863209645 podStartE2EDuration="21.863209645s" podCreationTimestamp="2026-01-31 15:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:06.856130369 +0000 UTC m=+1286.610868692" watchObservedRunningTime="2026-01-31 15:16:06.863209645 +0000 UTC m=+1286.617947938" Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.958367 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-2" podStartSLOduration=21.958345025 podStartE2EDuration="21.958345025s" podCreationTimestamp="2026-01-31 15:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:06.957191286 +0000 UTC m=+1286.711929599" watchObservedRunningTime="2026-01-31 15:16:06.958345025 +0000 UTC m=+1286.713083318" Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.961523 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-1" podStartSLOduration=21.961509069 podStartE2EDuration="21.961509069s" podCreationTimestamp="2026-01-31 15:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-31 15:16:06.916268269 +0000 UTC m=+1286.671006572" watchObservedRunningTime="2026-01-31 15:16:06.961509069 +0000 UTC m=+1286.716247362" Jan 31 15:16:13 crc kubenswrapper[4763]: I0131 15:16:13.397034 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:16:13 crc kubenswrapper[4763]: I0131 15:16:13.398566 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.177639 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.177783 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.552284 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8"] Jan 31 15:16:14 crc kubenswrapper[4763]: E0131 15:16:14.552794 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a85fc7e-226b-498d-9156-c4a5ecf075b9" containerName="swift-ring-rebalance" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.552810 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a85fc7e-226b-498d-9156-c4a5ecf075b9" containerName="swift-ring-rebalance" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.553019 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a85fc7e-226b-498d-9156-c4a5ecf075b9" containerName="swift-ring-rebalance" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.553626 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.556745 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.557463 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.568544 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8"] Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.692651 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-swiftconf\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.692778 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-etc-swift\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.692867 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzq9w\" (UniqueName: \"kubernetes.io/projected/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-kube-api-access-hzq9w\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.693000 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-dispersionconf\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.693055 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-ring-data-devices\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.693114 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-scripts\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.794804 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzq9w\" (UniqueName: \"kubernetes.io/projected/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-kube-api-access-hzq9w\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" 
Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.794880 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-dispersionconf\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.794906 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-ring-data-devices\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.794932 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-scripts\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.795025 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-swiftconf\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.795045 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-etc-swift\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.795999 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-etc-swift\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.797286 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-ring-data-devices\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.797333 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-scripts\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.803201 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-swiftconf\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 
15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.805390 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-dispersionconf\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.817950 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzq9w\" (UniqueName: \"kubernetes.io/projected/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-kube-api-access-hzq9w\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.877282 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:15 crc kubenswrapper[4763]: I0131 15:16:15.349333 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8"] Jan 31 15:16:15 crc kubenswrapper[4763]: W0131 15:16:15.351989 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f1fdbe2_6748_4c51_ac3f_dab0229c44cc.slice/crio-67d0dcf7e61bf52cb933ac95c51f0d8509d1d5f271dc71ff2cd0b95c5b935896 WatchSource:0}: Error finding container 67d0dcf7e61bf52cb933ac95c51f0d8509d1d5f271dc71ff2cd0b95c5b935896: Status 404 returned error can't find the container with id 67d0dcf7e61bf52cb933ac95c51f0d8509d1d5f271dc71ff2cd0b95c5b935896 Jan 31 15:16:15 crc kubenswrapper[4763]: I0131 15:16:15.924337 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" event={"ID":"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc","Type":"ContainerStarted","Data":"aa5b58c7ff93f3c15da7f7f96ecebd65245380f1df8d802df393a3683573f42b"} Jan 31 15:16:15 crc kubenswrapper[4763]: I0131 15:16:15.924414 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" event={"ID":"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc","Type":"ContainerStarted","Data":"67d0dcf7e61bf52cb933ac95c51f0d8509d1d5f271dc71ff2cd0b95c5b935896"} Jan 31 15:16:17 crc kubenswrapper[4763]: I0131 15:16:17.942980 4763 generic.go:334] "Generic (PLEG): container finished" podID="5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" containerID="aa5b58c7ff93f3c15da7f7f96ecebd65245380f1df8d802df393a3683573f42b" exitCode=0 Jan 31 15:16:17 crc kubenswrapper[4763]: I0131 15:16:17.943036 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" event={"ID":"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc","Type":"ContainerDied","Data":"aa5b58c7ff93f3c15da7f7f96ecebd65245380f1df8d802df393a3683573f42b"} Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.308245 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.349804 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8"] Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.356474 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8"] Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.463608 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzq9w\" (UniqueName: \"kubernetes.io/projected/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-kube-api-access-hzq9w\") pod \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.463785 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-swiftconf\") pod \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.463910 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-etc-swift\") pod \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.464003 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-dispersionconf\") pod \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.464070 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-ring-data-devices\") pod \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.464105 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-scripts\") pod \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.465586 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" (UID: "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.466863 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" (UID: "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.469188 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-kube-api-access-hzq9w" (OuterVolumeSpecName: "kube-api-access-hzq9w") pod "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" (UID: "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc"). InnerVolumeSpecName "kube-api-access-hzq9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.486501 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn"] Jan 31 15:16:19 crc kubenswrapper[4763]: E0131 15:16:19.486862 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" containerName="swift-ring-rebalance" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.486883 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" containerName="swift-ring-rebalance" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.487075 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" containerName="swift-ring-rebalance" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.489985 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.495026 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" (UID: "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.517041 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" (UID: "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.520364 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn"] Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.530291 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-scripts" (OuterVolumeSpecName: "scripts") pod "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" (UID: "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.566298 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-swiftconf\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.566467 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/150fee52-9a9b-47cf-aeaf-1699d0cbe077-etc-swift\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.566509 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8gkp\" (UniqueName: \"kubernetes.io/projected/150fee52-9a9b-47cf-aeaf-1699d0cbe077-kube-api-access-j8gkp\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.566567 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-ring-data-devices\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.566670 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-dispersionconf\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.566849 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-scripts\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.567049 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzq9w\" (UniqueName: \"kubernetes.io/projected/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-kube-api-access-hzq9w\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.567071 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.567085 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.567096 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.567108 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.567119 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.668532 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-scripts\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.668619 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-swiftconf\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.668720 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/150fee52-9a9b-47cf-aeaf-1699d0cbe077-etc-swift\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.668744 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8gkp\" (UniqueName: \"kubernetes.io/projected/150fee52-9a9b-47cf-aeaf-1699d0cbe077-kube-api-access-j8gkp\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.668775 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-ring-data-devices\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.668804 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-dispersionconf\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.669427 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/150fee52-9a9b-47cf-aeaf-1699d0cbe077-etc-swift\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.669562 4763 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-scripts\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.670472 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-ring-data-devices\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.672242 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-swiftconf\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.672324 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-dispersionconf\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.692450 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8gkp\" (UniqueName: \"kubernetes.io/projected/150fee52-9a9b-47cf-aeaf-1699d0cbe077-kube-api-access-j8gkp\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.892397 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.965189 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67d0dcf7e61bf52cb933ac95c51f0d8509d1d5f271dc71ff2cd0b95c5b935896" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.965234 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:20 crc kubenswrapper[4763]: I0131 15:16:20.334390 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn"] Jan 31 15:16:20 crc kubenswrapper[4763]: I0131 15:16:20.977385 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" event={"ID":"150fee52-9a9b-47cf-aeaf-1699d0cbe077","Type":"ContainerStarted","Data":"c60e5d86263ce2d232f363b5e6d0ca6b837b598ceba6f85ee0bd89925cf0c6dd"} Jan 31 15:16:20 crc kubenswrapper[4763]: I0131 15:16:20.977731 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" event={"ID":"150fee52-9a9b-47cf-aeaf-1699d0cbe077","Type":"ContainerStarted","Data":"d5916ddcab4c821a804d1bbf18735ca780e3635e1775c3acaf28cc394d8be895"} Jan 31 15:16:21 crc kubenswrapper[4763]: I0131 15:16:21.013057 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" podStartSLOduration=2.013016418 podStartE2EDuration="2.013016418s" podCreationTimestamp="2026-01-31 15:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:21.00030358 +0000 UTC m=+1300.755041903" watchObservedRunningTime="2026-01-31 15:16:21.013016418 +0000 UTC m=+1300.767754721" Jan 31 15:16:21 crc kubenswrapper[4763]: I0131 15:16:21.056307 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" path="/var/lib/kubelet/pods/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc/volumes" Jan 31 15:16:23 crc kubenswrapper[4763]: I0131 15:16:23.000364 4763 generic.go:334] "Generic (PLEG): container finished" podID="150fee52-9a9b-47cf-aeaf-1699d0cbe077" containerID="c60e5d86263ce2d232f363b5e6d0ca6b837b598ceba6f85ee0bd89925cf0c6dd" exitCode=0 Jan 31 15:16:23 crc kubenswrapper[4763]: I0131 15:16:23.000497 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" event={"ID":"150fee52-9a9b-47cf-aeaf-1699d0cbe077","Type":"ContainerDied","Data":"c60e5d86263ce2d232f363b5e6d0ca6b837b598ceba6f85ee0bd89925cf0c6dd"} Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.406462 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.443104 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn"] Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.452439 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn"] Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.550290 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8gkp\" (UniqueName: \"kubernetes.io/projected/150fee52-9a9b-47cf-aeaf-1699d0cbe077-kube-api-access-j8gkp\") pod \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.550437 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-scripts\") pod \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.550476 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-swiftconf\") pod \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.550659 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-ring-data-devices\") pod \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.550793 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/150fee52-9a9b-47cf-aeaf-1699d0cbe077-etc-swift\") pod \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.550851 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-dispersionconf\") pod \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.551155 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "150fee52-9a9b-47cf-aeaf-1699d0cbe077" (UID: "150fee52-9a9b-47cf-aeaf-1699d0cbe077"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.551375 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.552038 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/150fee52-9a9b-47cf-aeaf-1699d0cbe077-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "150fee52-9a9b-47cf-aeaf-1699d0cbe077" (UID: "150fee52-9a9b-47cf-aeaf-1699d0cbe077"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.557749 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/150fee52-9a9b-47cf-aeaf-1699d0cbe077-kube-api-access-j8gkp" (OuterVolumeSpecName: "kube-api-access-j8gkp") pod "150fee52-9a9b-47cf-aeaf-1699d0cbe077" (UID: "150fee52-9a9b-47cf-aeaf-1699d0cbe077"). InnerVolumeSpecName "kube-api-access-j8gkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.572903 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-scripts" (OuterVolumeSpecName: "scripts") pod "150fee52-9a9b-47cf-aeaf-1699d0cbe077" (UID: "150fee52-9a9b-47cf-aeaf-1699d0cbe077"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.576185 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "150fee52-9a9b-47cf-aeaf-1699d0cbe077" (UID: "150fee52-9a9b-47cf-aeaf-1699d0cbe077"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.579818 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "150fee52-9a9b-47cf-aeaf-1699d0cbe077" (UID: "150fee52-9a9b-47cf-aeaf-1699d0cbe077"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.654383 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/150fee52-9a9b-47cf-aeaf-1699d0cbe077-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.654421 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.654436 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8gkp\" (UniqueName: \"kubernetes.io/projected/150fee52-9a9b-47cf-aeaf-1699d0cbe077-kube-api-access-j8gkp\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.654447 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.654458 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.886439 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh"] Jan 31 15:16:24 crc kubenswrapper[4763]: E0131 15:16:24.887095 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150fee52-9a9b-47cf-aeaf-1699d0cbe077" containerName="swift-ring-rebalance" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.887120 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="150fee52-9a9b-47cf-aeaf-1699d0cbe077" containerName="swift-ring-rebalance" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.887287 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="150fee52-9a9b-47cf-aeaf-1699d0cbe077" containerName="swift-ring-rebalance" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.887987 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.900482 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh"] Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.957490 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-etc-swift\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.957584 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-ring-data-devices\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.957618 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-swiftconf\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.957640 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vflv\" (UniqueName: \"kubernetes.io/projected/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-kube-api-access-4vflv\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.957664 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-scripts\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.957803 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-dispersionconf\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.020991 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5916ddcab4c821a804d1bbf18735ca780e3635e1775c3acaf28cc394d8be895" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.021034 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.058913 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-ring-data-devices\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.059001 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-swiftconf\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.059074 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vflv\" (UniqueName: \"kubernetes.io/projected/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-kube-api-access-4vflv\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.059147 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-scripts\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.059278 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-dispersionconf\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.059359 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-etc-swift\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.060293 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-etc-swift\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.060827 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-ring-data-devices\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.060947 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-scripts\") pod 
\"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.061868 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="150fee52-9a9b-47cf-aeaf-1699d0cbe077" path="/var/lib/kubelet/pods/150fee52-9a9b-47cf-aeaf-1699d0cbe077/volumes" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.065812 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-dispersionconf\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.077810 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-swiftconf\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.083055 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vflv\" (UniqueName: \"kubernetes.io/projected/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-kube-api-access-4vflv\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.205556 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.724250 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh"] Jan 31 15:16:26 crc kubenswrapper[4763]: I0131 15:16:26.034380 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" event={"ID":"a9c426eb-2a62-47e9-aa4a-53a30fe02b82","Type":"ContainerStarted","Data":"a341e6df1a37494965c9886e4a13005e2a4f651428e838086726ac3163d9cf3e"} Jan 31 15:16:26 crc kubenswrapper[4763]: I0131 15:16:26.034448 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" event={"ID":"a9c426eb-2a62-47e9-aa4a-53a30fe02b82","Type":"ContainerStarted","Data":"eee7e3263b2dee12dc6fe7a42457ffbf6509bd6d4cff970117aec89f30e09e69"} Jan 31 15:16:26 crc kubenswrapper[4763]: I0131 15:16:26.072849 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" podStartSLOduration=2.072822614 podStartE2EDuration="2.072822614s" podCreationTimestamp="2026-01-31 15:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:26.063133406 +0000 UTC m=+1305.817871759" watchObservedRunningTime="2026-01-31 15:16:26.072822614 +0000 UTC m=+1305.827560947" Jan 31 15:16:28 crc kubenswrapper[4763]: I0131 15:16:28.049665 4763 generic.go:334] "Generic (PLEG): container finished" podID="a9c426eb-2a62-47e9-aa4a-53a30fe02b82" containerID="a341e6df1a37494965c9886e4a13005e2a4f651428e838086726ac3163d9cf3e" exitCode=0 Jan 31 15:16:28 crc kubenswrapper[4763]: I0131 15:16:28.049740 4763 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" event={"ID":"a9c426eb-2a62-47e9-aa4a-53a30fe02b82","Type":"ContainerDied","Data":"a341e6df1a37494965c9886e4a13005e2a4f651428e838086726ac3163d9cf3e"} Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.330574 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.363029 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh"] Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.372639 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh"] Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.437133 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-ring-data-devices\") pod \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.437179 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-scripts\") pod \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.437236 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-swiftconf\") pod \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.437314 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-etc-swift\") pod \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.437341 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vflv\" (UniqueName: \"kubernetes.io/projected/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-kube-api-access-4vflv\") pod \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.437383 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-dispersionconf\") pod \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.437773 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a9c426eb-2a62-47e9-aa4a-53a30fe02b82" (UID: "a9c426eb-2a62-47e9-aa4a-53a30fe02b82"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.438157 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a9c426eb-2a62-47e9-aa4a-53a30fe02b82" (UID: "a9c426eb-2a62-47e9-aa4a-53a30fe02b82"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.438380 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.438399 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.442874 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-kube-api-access-4vflv" (OuterVolumeSpecName: "kube-api-access-4vflv") pod "a9c426eb-2a62-47e9-aa4a-53a30fe02b82" (UID: "a9c426eb-2a62-47e9-aa4a-53a30fe02b82"). InnerVolumeSpecName "kube-api-access-4vflv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.461589 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a9c426eb-2a62-47e9-aa4a-53a30fe02b82" (UID: "a9c426eb-2a62-47e9-aa4a-53a30fe02b82"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.475788 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-scripts" (OuterVolumeSpecName: "scripts") pod "a9c426eb-2a62-47e9-aa4a-53a30fe02b82" (UID: "a9c426eb-2a62-47e9-aa4a-53a30fe02b82"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.485372 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a9c426eb-2a62-47e9-aa4a-53a30fe02b82" (UID: "a9c426eb-2a62-47e9-aa4a-53a30fe02b82"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.540087 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.540134 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vflv\" (UniqueName: \"kubernetes.io/projected/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-kube-api-access-4vflv\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.540150 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.540167 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.068087 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee7e3263b2dee12dc6fe7a42457ffbf6509bd6d4cff970117aec89f30e09e69" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.068183 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.524528 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-bthkw"] Jan 31 15:16:30 crc kubenswrapper[4763]: E0131 15:16:30.525035 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c426eb-2a62-47e9-aa4a-53a30fe02b82" containerName="swift-ring-rebalance" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.525062 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c426eb-2a62-47e9-aa4a-53a30fe02b82" containerName="swift-ring-rebalance" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.525336 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c426eb-2a62-47e9-aa4a-53a30fe02b82" containerName="swift-ring-rebalance" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.526143 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.529862 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.529867 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.536019 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-bthkw"] Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.656837 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qcjx\" (UniqueName: \"kubernetes.io/projected/a43d57a1-ba43-46fb-a389-59805d5a576e-kube-api-access-4qcjx\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.657201 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-swiftconf\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.657247 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a43d57a1-ba43-46fb-a389-59805d5a576e-etc-swift\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.657303 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-dispersionconf\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.657350 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-ring-data-devices\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.657506 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-scripts\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.759612 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qcjx\" (UniqueName: \"kubernetes.io/projected/a43d57a1-ba43-46fb-a389-59805d5a576e-kube-api-access-4qcjx\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" 
Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.759672 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-swiftconf\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.759736 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a43d57a1-ba43-46fb-a389-59805d5a576e-etc-swift\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.759788 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-dispersionconf\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.759827 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-ring-data-devices\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.759882 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-scripts\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.760854 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a43d57a1-ba43-46fb-a389-59805d5a576e-etc-swift\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.760976 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-scripts\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.761546 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-ring-data-devices\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.767155 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-swiftconf\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 
15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.767148 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-dispersionconf\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.793844 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qcjx\" (UniqueName: \"kubernetes.io/projected/a43d57a1-ba43-46fb-a389-59805d5a576e-kube-api-access-4qcjx\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.859448 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:31 crc kubenswrapper[4763]: I0131 15:16:31.052990 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9c426eb-2a62-47e9-aa4a-53a30fe02b82" path="/var/lib/kubelet/pods/a9c426eb-2a62-47e9-aa4a-53a30fe02b82/volumes" Jan 31 15:16:31 crc kubenswrapper[4763]: I0131 15:16:31.303117 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-bthkw"] Jan 31 15:16:32 crc kubenswrapper[4763]: I0131 15:16:32.090650 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" event={"ID":"a43d57a1-ba43-46fb-a389-59805d5a576e","Type":"ContainerStarted","Data":"2c8676d586d95c99312bd225b1346101b96d75d6e5d4002c693abb76994fc014"} Jan 31 15:16:32 crc kubenswrapper[4763]: I0131 15:16:32.090784 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" event={"ID":"a43d57a1-ba43-46fb-a389-59805d5a576e","Type":"ContainerStarted","Data":"13d9bc5b8a0579a4aacab59b3629dad98778c85c9556b82cf66907dfc9566d29"} Jan 31 15:16:32 crc kubenswrapper[4763]: I0131 15:16:32.118883 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" podStartSLOduration=2.118859665 podStartE2EDuration="2.118859665s" podCreationTimestamp="2026-01-31 15:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:32.113138723 +0000 UTC m=+1311.867877036" watchObservedRunningTime="2026-01-31 15:16:32.118859665 +0000 UTC m=+1311.873597998" Jan 31 15:16:33 crc kubenswrapper[4763]: I0131 15:16:33.101707 4763 generic.go:334] "Generic (PLEG): container finished" podID="a43d57a1-ba43-46fb-a389-59805d5a576e" containerID="2c8676d586d95c99312bd225b1346101b96d75d6e5d4002c693abb76994fc014" exitCode=0 Jan 31 15:16:33 crc kubenswrapper[4763]: I0131 15:16:33.101764 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" event={"ID":"a43d57a1-ba43-46fb-a389-59805d5a576e","Type":"ContainerDied","Data":"2c8676d586d95c99312bd225b1346101b96d75d6e5d4002c693abb76994fc014"} Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.459788 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.506021 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-bthkw"] Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.514478 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-bthkw"] Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.619849 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-scripts\") pod \"a43d57a1-ba43-46fb-a389-59805d5a576e\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.619915 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qcjx\" (UniqueName: \"kubernetes.io/projected/a43d57a1-ba43-46fb-a389-59805d5a576e-kube-api-access-4qcjx\") pod \"a43d57a1-ba43-46fb-a389-59805d5a576e\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.619985 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a43d57a1-ba43-46fb-a389-59805d5a576e-etc-swift\") pod \"a43d57a1-ba43-46fb-a389-59805d5a576e\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.620035 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-swiftconf\") pod \"a43d57a1-ba43-46fb-a389-59805d5a576e\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.620116 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-ring-data-devices\") pod \"a43d57a1-ba43-46fb-a389-59805d5a576e\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.620834 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a43d57a1-ba43-46fb-a389-59805d5a576e" (UID: "a43d57a1-ba43-46fb-a389-59805d5a576e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.620942 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-dispersionconf\") pod \"a43d57a1-ba43-46fb-a389-59805d5a576e\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.621239 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a43d57a1-ba43-46fb-a389-59805d5a576e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a43d57a1-ba43-46fb-a389-59805d5a576e" (UID: "a43d57a1-ba43-46fb-a389-59805d5a576e"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.621626 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a43d57a1-ba43-46fb-a389-59805d5a576e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.621645 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.626340 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a43d57a1-ba43-46fb-a389-59805d5a576e-kube-api-access-4qcjx" (OuterVolumeSpecName: "kube-api-access-4qcjx") pod "a43d57a1-ba43-46fb-a389-59805d5a576e" (UID: "a43d57a1-ba43-46fb-a389-59805d5a576e"). InnerVolumeSpecName "kube-api-access-4qcjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.643960 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-scripts" (OuterVolumeSpecName: "scripts") pod "a43d57a1-ba43-46fb-a389-59805d5a576e" (UID: "a43d57a1-ba43-46fb-a389-59805d5a576e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.652903 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a43d57a1-ba43-46fb-a389-59805d5a576e" (UID: "a43d57a1-ba43-46fb-a389-59805d5a576e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.653769 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a43d57a1-ba43-46fb-a389-59805d5a576e" (UID: "a43d57a1-ba43-46fb-a389-59805d5a576e"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.722877 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.722921 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.722930 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qcjx\" (UniqueName: \"kubernetes.io/projected/a43d57a1-ba43-46fb-a389-59805d5a576e-kube-api-access-4qcjx\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.722941 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.050480 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a43d57a1-ba43-46fb-a389-59805d5a576e" path="/var/lib/kubelet/pods/a43d57a1-ba43-46fb-a389-59805d5a576e/volumes" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.117170 4763 scope.go:117] "RemoveContainer" containerID="2c8676d586d95c99312bd225b1346101b96d75d6e5d4002c693abb76994fc014" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.117192 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.689186 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4"] Jan 31 15:16:35 crc kubenswrapper[4763]: E0131 15:16:35.689969 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43d57a1-ba43-46fb-a389-59805d5a576e" containerName="swift-ring-rebalance" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.689993 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43d57a1-ba43-46fb-a389-59805d5a576e" containerName="swift-ring-rebalance" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.690258 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="a43d57a1-ba43-46fb-a389-59805d5a576e" containerName="swift-ring-rebalance" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.690924 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.694890 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.697275 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.706462 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4"] Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.839075 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-ring-data-devices\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.839192 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-496xv\" (UniqueName: \"kubernetes.io/projected/7dec991f-9426-402a-8f83-8547257d2b30-kube-api-access-496xv\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.839239 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-dispersionconf\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.839270 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-swiftconf\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.839413 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-scripts\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.839486 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7dec991f-9426-402a-8f83-8547257d2b30-etc-swift\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.941481 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-scripts\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc 
kubenswrapper[4763]: I0131 15:16:35.941592 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7dec991f-9426-402a-8f83-8547257d2b30-etc-swift\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.941662 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-ring-data-devices\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.941767 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-496xv\" (UniqueName: \"kubernetes.io/projected/7dec991f-9426-402a-8f83-8547257d2b30-kube-api-access-496xv\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.941812 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-dispersionconf\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.941845 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-swiftconf\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.942087 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7dec991f-9426-402a-8f83-8547257d2b30-etc-swift\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.943168 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-scripts\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.944183 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-ring-data-devices\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.946404 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-dispersionconf\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " 
pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.946623 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-swiftconf\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.966609 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-496xv\" (UniqueName: \"kubernetes.io/projected/7dec991f-9426-402a-8f83-8547257d2b30-kube-api-access-496xv\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:36 crc kubenswrapper[4763]: I0131 15:16:36.025678 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:36 crc kubenswrapper[4763]: I0131 15:16:36.493112 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4"] Jan 31 15:16:37 crc kubenswrapper[4763]: I0131 15:16:37.163020 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" event={"ID":"7dec991f-9426-402a-8f83-8547257d2b30","Type":"ContainerStarted","Data":"7de2f6d754dc155c1220dbc7207385fffee69382d5745f470315b5df89030e55"} Jan 31 15:16:37 crc kubenswrapper[4763]: I0131 15:16:37.163334 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" event={"ID":"7dec991f-9426-402a-8f83-8547257d2b30","Type":"ContainerStarted","Data":"099d7ea2b86fce3db22caaf0c1563ff510b9caeb509987782a85e05d35b47aab"} Jan 31 15:16:37 crc kubenswrapper[4763]: I0131 15:16:37.185511 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" podStartSLOduration=2.185494953 podStartE2EDuration="2.185494953s" podCreationTimestamp="2026-01-31 15:16:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:37.179593116 +0000 UTC m=+1316.934331409" watchObservedRunningTime="2026-01-31 15:16:37.185494953 +0000 UTC m=+1316.940233256" Jan 31 15:16:38 crc kubenswrapper[4763]: I0131 15:16:38.176722 4763 generic.go:334] "Generic (PLEG): container finished" podID="7dec991f-9426-402a-8f83-8547257d2b30" containerID="7de2f6d754dc155c1220dbc7207385fffee69382d5745f470315b5df89030e55" exitCode=0 Jan 31 15:16:38 crc kubenswrapper[4763]: I0131 15:16:38.176769 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" event={"ID":"7dec991f-9426-402a-8f83-8547257d2b30","Type":"ContainerDied","Data":"7de2f6d754dc155c1220dbc7207385fffee69382d5745f470315b5df89030e55"} Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.472362 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.522664 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4"] Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.532188 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4"] Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.606485 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-swiftconf\") pod \"7dec991f-9426-402a-8f83-8547257d2b30\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.606549 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-dispersionconf\") pod \"7dec991f-9426-402a-8f83-8547257d2b30\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.606573 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-ring-data-devices\") pod \"7dec991f-9426-402a-8f83-8547257d2b30\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.606763 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7dec991f-9426-402a-8f83-8547257d2b30-etc-swift\") pod \"7dec991f-9426-402a-8f83-8547257d2b30\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.606793 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-scripts\") pod \"7dec991f-9426-402a-8f83-8547257d2b30\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.606818 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-496xv\" (UniqueName: \"kubernetes.io/projected/7dec991f-9426-402a-8f83-8547257d2b30-kube-api-access-496xv\") pod \"7dec991f-9426-402a-8f83-8547257d2b30\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.611894 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "7dec991f-9426-402a-8f83-8547257d2b30" (UID: "7dec991f-9426-402a-8f83-8547257d2b30"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.611968 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612513 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-server" containerID="cri-o://9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612631 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-updater" containerID="cri-o://afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612675 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-auditor" containerID="cri-o://f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612729 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-replicator" containerID="cri-o://483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612664 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-server" containerID="cri-o://1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612915 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-expirer" containerID="cri-o://1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612775 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-server" containerID="cri-o://400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.613107 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-updater" containerID="cri-o://3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612813 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-replicator" containerID="cri-o://619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.613211 4763 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-auditor" containerID="cri-o://73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.613227 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="swift-recon-cron" containerID="cri-o://c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.613266 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-replicator" containerID="cri-o://4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612799 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-auditor" containerID="cri-o://3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612785 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-reaper" containerID="cri-o://cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.615022 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dec991f-9426-402a-8f83-8547257d2b30-kube-api-access-496xv" (OuterVolumeSpecName: "kube-api-access-496xv") pod "7dec991f-9426-402a-8f83-8547257d2b30" (UID: "7dec991f-9426-402a-8f83-8547257d2b30"). InnerVolumeSpecName "kube-api-access-496xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.618782 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="rsync" containerID="cri-o://a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.619446 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dec991f-9426-402a-8f83-8547257d2b30-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7dec991f-9426-402a-8f83-8547257d2b30" (UID: "7dec991f-9426-402a-8f83-8547257d2b30"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.655387 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-scripts" (OuterVolumeSpecName: "scripts") pod "7dec991f-9426-402a-8f83-8547257d2b30" (UID: "7dec991f-9426-402a-8f83-8547257d2b30"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.681908 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.682421 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-server" containerID="cri-o://2a92b2bd4fc8d6dca2fcd21333d8ea9dc1bea550c48e4c4023603133141572ad" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.682885 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-updater" containerID="cri-o://18e785ee82660570c7a3c1f8a825fe58e5562959092c137ca7e8ad12f67b2cdf" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.682951 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="swift-recon-cron" containerID="cri-o://36471f31ca52c785e38c6b2b66ccd5857f6ef478d9ab8974c38189fbf0e27a7c" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.682995 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="rsync" containerID="cri-o://a193fbbc6d4c1b876140e857534a3e3476054468d680fb5032ef95fc43449eec" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683024 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-expirer" containerID="cri-o://aa4453d984b8623efdeb6c14caeec9684c9789a6bbf3f070fda2ae53d211bc67" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683052 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-updater" containerID="cri-o://9716359f4da3a71b9390a156a58d83e5742038cd7c70d2aaa6dddd57c4c7402f" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683081 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-auditor" containerID="cri-o://4007498dac8fd3b8123d95c0037cc87ebfb6676c6920014087622d824443355a" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683106 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-replicator" containerID="cri-o://811bfef1f45e1c8bfb983ccdb9950299ee77652d3e6b07f11a1f38dcaa006989" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683137 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-server" containerID="cri-o://433775c4538892b2b06557027ca728e6b8b86916941810cde9bf0aaa7cec78dd" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683216 4763 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-reaper" containerID="cri-o://539252f99bc3e6264ace92eb1a171fa552e64dc1e5ab28a67e1806e3665008b7" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683251 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-auditor" containerID="cri-o://5de93fda3e4eeec09c3e0cab0c53f8bd9bc5a576f08e2f6a3e358bb501d0aee7" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683283 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-replicator" containerID="cri-o://46bb08319fc51c2ac2e298b3d88809c97e2206ce13b2762bb97ee19fa37761d9" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683311 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-server" containerID="cri-o://d21b593c360d355cf6f976e7a33dd5c9a7af1da589078440cac056c5b3195552" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683351 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-replicator" containerID="cri-o://6436423e21963e2b944d7661270d579b854013f8d01c88fd036a9b4a15a61846" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683387 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-auditor" containerID="cri-o://8743be8babd948dd0c5cadcbb327888a64b81d91784dcd88e4edc80760703747" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.691490 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692402 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-server" containerID="cri-o://0c8b0727815b22780ef52e9340937734001c60d592a0efa293171a6a99f631b8" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692395 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-expirer" containerID="cri-o://2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692483 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-updater" containerID="cri-o://4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692538 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-auditor" 
containerID="cri-o://98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692639 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-replicator" containerID="cri-o://be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692678 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="swift-recon-cron" containerID="cri-o://bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692721 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-server" containerID="cri-o://92ad90a8f7a31ed2cf85f0757c71ef34da8f0c359347e86a4bcef54ce85516a6" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692746 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="rsync" containerID="cri-o://ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692766 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-updater" containerID="cri-o://fbf72198234fda54505fa079d75aa79a7acaa34fd24b31ddd293f0aae93e0c93" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692833 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-auditor" containerID="cri-o://9a8de8d27e0778796c6a4baff03a1d1d922e9ae78f97728f4ae4aaf7fa341563" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692866 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-reaper" containerID="cri-o://6bbf9c9a63d33d2c6b6e0178b1ade00637993606e119582492e66b6a24013ab8" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692903 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-replicator" containerID="cri-o://86c85c15b961f6795b6b5dc674ffc310871f76d3dbb1eb7225326dc937e30e64" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692907 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-auditor" containerID="cri-o://710120310f88e5c722eacaf49bc091b604c9117b11eb65ce63fd698be02b6699" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692935 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" 
containerName="container-server" containerID="cri-o://1351528e398244f79dc02492c56eb7c0b5cf97f5af4ab495fdf34673f2be4e3a" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692964 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-replicator" containerID="cri-o://99ceb2ba2010e4648343e1238ccdc00c944820acf210df397fc1aa1ebd073a62" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.700657 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-cw985"] Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.708787 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-cw985"] Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.709921 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.709951 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7dec991f-9426-402a-8f83-8547257d2b30-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.709968 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-496xv\" (UniqueName: \"kubernetes.io/projected/7dec991f-9426-402a-8f83-8547257d2b30-kube-api-access-496xv\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.709978 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.720172 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj"] Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.720461 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerName="proxy-httpd" containerID="cri-o://2bbe980cdd23a298b995253469039eca5a71af206ca6fa11e2f7a2a5c08d968c" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.720899 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerName="proxy-server" containerID="cri-o://feb9eca2f3460795772517c1281dac16f555d0d80bd9ba799ff73f601b5b71c8" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.734974 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "7dec991f-9426-402a-8f83-8547257d2b30" (UID: "7dec991f-9426-402a-8f83-8547257d2b30"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.752243 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "7dec991f-9426-402a-8f83-8547257d2b30" (UID: "7dec991f-9426-402a-8f83-8547257d2b30"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.814894 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.814941 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:40 crc kubenswrapper[4763]: E0131 15:16:40.149798 4763 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6ace0ac_a7c8_4413_90ee_53d6bf699eef.slice/crio-conmon-0c8b0727815b22780ef52e9340937734001c60d592a0efa293171a6a99f631b8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d1baecf_2b7e_418b_8c64_95b6551f365e.slice/crio-conmon-a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda76b9c98_a93e_4935_947c_9ecf237b7a97.slice/crio-conmon-feb9eca2f3460795772517c1281dac16f555d0d80bd9ba799ff73f601b5b71c8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d1baecf_2b7e_418b_8c64_95b6551f365e.slice/crio-a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a5bfb32_7eae_4b04_9aee_d0873f0c93b9.slice/crio-a193fbbc6d4c1b876140e857534a3e3476054468d680fb5032ef95fc43449eec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6ace0ac_a7c8_4413_90ee_53d6bf699eef.slice/crio-conmon-92ad90a8f7a31ed2cf85f0757c71ef34da8f0c359347e86a4bcef54ce85516a6.scope\": RecentStats: unable to find data in memory cache]" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209233 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="a193fbbc6d4c1b876140e857534a3e3476054468d680fb5032ef95fc43449eec" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209265 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="aa4453d984b8623efdeb6c14caeec9684c9789a6bbf3f070fda2ae53d211bc67" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209275 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="9716359f4da3a71b9390a156a58d83e5742038cd7c70d2aaa6dddd57c4c7402f" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209283 4763 generic.go:334] "Generic (PLEG): container finished" 
podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="4007498dac8fd3b8123d95c0037cc87ebfb6676c6920014087622d824443355a" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209290 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="811bfef1f45e1c8bfb983ccdb9950299ee77652d3e6b07f11a1f38dcaa006989" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209298 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="18e785ee82660570c7a3c1f8a825fe58e5562959092c137ca7e8ad12f67b2cdf" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209304 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="5de93fda3e4eeec09c3e0cab0c53f8bd9bc5a576f08e2f6a3e358bb501d0aee7" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209313 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="46bb08319fc51c2ac2e298b3d88809c97e2206ce13b2762bb97ee19fa37761d9" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209319 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="539252f99bc3e6264ace92eb1a171fa552e64dc1e5ab28a67e1806e3665008b7" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209325 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="8743be8babd948dd0c5cadcbb327888a64b81d91784dcd88e4edc80760703747" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209332 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="6436423e21963e2b944d7661270d579b854013f8d01c88fd036a9b4a15a61846" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209269 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"a193fbbc6d4c1b876140e857534a3e3476054468d680fb5032ef95fc43449eec"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209369 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"aa4453d984b8623efdeb6c14caeec9684c9789a6bbf3f070fda2ae53d211bc67"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209386 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"9716359f4da3a71b9390a156a58d83e5742038cd7c70d2aaa6dddd57c4c7402f"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209396 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"4007498dac8fd3b8123d95c0037cc87ebfb6676c6920014087622d824443355a"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209404 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"811bfef1f45e1c8bfb983ccdb9950299ee77652d3e6b07f11a1f38dcaa006989"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209412 4763 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"18e785ee82660570c7a3c1f8a825fe58e5562959092c137ca7e8ad12f67b2cdf"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209420 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"5de93fda3e4eeec09c3e0cab0c53f8bd9bc5a576f08e2f6a3e358bb501d0aee7"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209428 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"46bb08319fc51c2ac2e298b3d88809c97e2206ce13b2762bb97ee19fa37761d9"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209439 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"539252f99bc3e6264ace92eb1a171fa552e64dc1e5ab28a67e1806e3665008b7"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209447 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"8743be8babd948dd0c5cadcbb327888a64b81d91784dcd88e4edc80760703747"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209454 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"6436423e21963e2b944d7661270d579b854013f8d01c88fd036a9b4a15a61846"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.216980 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217004 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217010 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217017 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217023 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217028 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217035 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe" exitCode=0 Jan 
31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217041 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217047 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217053 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217058 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217064 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217070 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217113 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217137 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217147 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217155 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217163 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217171 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217179 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217188 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217196 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217205 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217213 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217222 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217229 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239457 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239677 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239711 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239719 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239726 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="92ad90a8f7a31ed2cf85f0757c71ef34da8f0c359347e86a4bcef54ce85516a6" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239732 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="fbf72198234fda54505fa079d75aa79a7acaa34fd24b31ddd293f0aae93e0c93" exitCode=0 Jan 31 15:16:40 crc 
kubenswrapper[4763]: I0131 15:16:40.239793 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="9a8de8d27e0778796c6a4baff03a1d1d922e9ae78f97728f4ae4aaf7fa341563" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239799 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="86c85c15b961f6795b6b5dc674ffc310871f76d3dbb1eb7225326dc937e30e64" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239805 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="6bbf9c9a63d33d2c6b6e0178b1ade00637993606e119582492e66b6a24013ab8" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239811 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="710120310f88e5c722eacaf49bc091b604c9117b11eb65ce63fd698be02b6699" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239817 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="99ceb2ba2010e4648343e1238ccdc00c944820acf210df397fc1aa1ebd073a62" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239822 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="0c8b0727815b22780ef52e9340937734001c60d592a0efa293171a6a99f631b8" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239754 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239875 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239885 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239896 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239904 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"92ad90a8f7a31ed2cf85f0757c71ef34da8f0c359347e86a4bcef54ce85516a6"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239913 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"fbf72198234fda54505fa079d75aa79a7acaa34fd24b31ddd293f0aae93e0c93"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239920 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"9a8de8d27e0778796c6a4baff03a1d1d922e9ae78f97728f4ae4aaf7fa341563"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239928 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"86c85c15b961f6795b6b5dc674ffc310871f76d3dbb1eb7225326dc937e30e64"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239936 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"6bbf9c9a63d33d2c6b6e0178b1ade00637993606e119582492e66b6a24013ab8"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239944 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"710120310f88e5c722eacaf49bc091b604c9117b11eb65ce63fd698be02b6699"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239952 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"99ceb2ba2010e4648343e1238ccdc00c944820acf210df397fc1aa1ebd073a62"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239961 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"0c8b0727815b22780ef52e9340937734001c60d592a0efa293171a6a99f631b8"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.244959 4763 generic.go:334] "Generic (PLEG): container finished" podID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerID="feb9eca2f3460795772517c1281dac16f555d0d80bd9ba799ff73f601b5b71c8" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.244996 4763 generic.go:334] "Generic (PLEG): container finished" podID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerID="2bbe980cdd23a298b995253469039eca5a71af206ca6fa11e2f7a2a5c08d968c" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.245088 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" event={"ID":"a76b9c98-a93e-4935-947c-9ecf237b7a97","Type":"ContainerDied","Data":"feb9eca2f3460795772517c1281dac16f555d0d80bd9ba799ff73f601b5b71c8"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.245126 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" event={"ID":"a76b9c98-a93e-4935-947c-9ecf237b7a97","Type":"ContainerDied","Data":"2bbe980cdd23a298b995253469039eca5a71af206ca6fa11e2f7a2a5c08d968c"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.247133 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="099d7ea2b86fce3db22caaf0c1563ff510b9caeb509987782a85e05d35b47aab" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.247204 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.399561 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.529241 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-log-httpd\") pod \"a76b9c98-a93e-4935-947c-9ecf237b7a97\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.529344 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f758c\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-kube-api-access-f758c\") pod \"a76b9c98-a93e-4935-947c-9ecf237b7a97\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.529435 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76b9c98-a93e-4935-947c-9ecf237b7a97-config-data\") pod \"a76b9c98-a93e-4935-947c-9ecf237b7a97\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.529482 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-run-httpd\") pod \"a76b9c98-a93e-4935-947c-9ecf237b7a97\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.529530 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") pod \"a76b9c98-a93e-4935-947c-9ecf237b7a97\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.529865 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a76b9c98-a93e-4935-947c-9ecf237b7a97" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.529875 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a76b9c98-a93e-4935-947c-9ecf237b7a97" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.530399 4763 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.530422 4763 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.533438 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-kube-api-access-f758c" (OuterVolumeSpecName: "kube-api-access-f758c") pod "a76b9c98-a93e-4935-947c-9ecf237b7a97" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97"). 
InnerVolumeSpecName "kube-api-access-f758c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.534203 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a76b9c98-a93e-4935-947c-9ecf237b7a97" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.577261 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a76b9c98-a93e-4935-947c-9ecf237b7a97-config-data" (OuterVolumeSpecName: "config-data") pod "a76b9c98-a93e-4935-947c-9ecf237b7a97" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.632097 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f758c\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-kube-api-access-f758c\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.632136 4763 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76b9c98-a93e-4935-947c-9ecf237b7a97-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.632146 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.059809 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a85fc7e-226b-498d-9156-c4a5ecf075b9" path="/var/lib/kubelet/pods/5a85fc7e-226b-498d-9156-c4a5ecf075b9/volumes" Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.063624 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dec991f-9426-402a-8f83-8547257d2b30" path="/var/lib/kubelet/pods/7dec991f-9426-402a-8f83-8547257d2b30/volumes" Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.259919 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.260450 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" event={"ID":"a76b9c98-a93e-4935-947c-9ecf237b7a97","Type":"ContainerDied","Data":"d3926134dfc43380c3c616215cd0f1de55c1eb370aa433868d96185fd6990644"} Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.260560 4763 scope.go:117] "RemoveContainer" containerID="feb9eca2f3460795772517c1281dac16f555d0d80bd9ba799ff73f601b5b71c8" Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.274906 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="433775c4538892b2b06557027ca728e6b8b86916941810cde9bf0aaa7cec78dd" exitCode=0 Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.274955 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="d21b593c360d355cf6f976e7a33dd5c9a7af1da589078440cac056c5b3195552" exitCode=0 Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.274974 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="2a92b2bd4fc8d6dca2fcd21333d8ea9dc1bea550c48e4c4023603133141572ad" exitCode=0 Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.275009 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"433775c4538892b2b06557027ca728e6b8b86916941810cde9bf0aaa7cec78dd"} Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.275081 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"d21b593c360d355cf6f976e7a33dd5c9a7af1da589078440cac056c5b3195552"} Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.275112 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"2a92b2bd4fc8d6dca2fcd21333d8ea9dc1bea550c48e4c4023603133141572ad"} Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.291928 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406" exitCode=0 Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.292029 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406"} Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.298688 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj"] Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.301758 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6" exitCode=0 Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.301783 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="1351528e398244f79dc02492c56eb7c0b5cf97f5af4ab495fdf34673f2be4e3a" exitCode=0 Jan 31 15:16:41 crc kubenswrapper[4763]: 
I0131 15:16:41.301803 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6"} Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.301831 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"1351528e398244f79dc02492c56eb7c0b5cf97f5af4ab495fdf34673f2be4e3a"} Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.302720 4763 scope.go:117] "RemoveContainer" containerID="2bbe980cdd23a298b995253469039eca5a71af206ca6fa11e2f7a2a5c08d968c" Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.309656 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj"] Jan 31 15:16:43 crc kubenswrapper[4763]: I0131 15:16:43.067055 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" path="/var/lib/kubelet/pods/a76b9c98-a93e-4935-947c-9ecf237b7a97/volumes" Jan 31 15:16:44 crc kubenswrapper[4763]: I0131 15:16:44.177795 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:16:44 crc kubenswrapper[4763]: I0131 15:16:44.177878 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:16:44 crc kubenswrapper[4763]: I0131 15:16:44.177936 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 15:16:44 crc kubenswrapper[4763]: I0131 15:16:44.179316 4763 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed9e66445ed11fb3e31f897826819c87a5cfa2eaf20ae073d2a90c461528b554"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:16:44 crc kubenswrapper[4763]: I0131 15:16:44.179823 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://ed9e66445ed11fb3e31f897826819c87a5cfa2eaf20ae073d2a90c461528b554" gracePeriod=600 Jan 31 15:16:44 crc kubenswrapper[4763]: I0131 15:16:44.346118 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="ed9e66445ed11fb3e31f897826819c87a5cfa2eaf20ae073d2a90c461528b554" exitCode=0 Jan 31 15:16:44 crc kubenswrapper[4763]: I0131 15:16:44.346192 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"ed9e66445ed11fb3e31f897826819c87a5cfa2eaf20ae073d2a90c461528b554"} Jan 31 
15:16:44 crc kubenswrapper[4763]: I0131 15:16:44.346267 4763 scope.go:117] "RemoveContainer" containerID="f20a9db2f936557dee18355e3894646df84495361a6df22f393e9e76f8aebb8e" Jan 31 15:16:45 crc kubenswrapper[4763]: I0131 15:16:45.364477 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053"} Jan 31 15:17:03 crc kubenswrapper[4763]: E0131 15:17:03.906463 4763 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/77ce0e0be23474712532b9234efc385243afef81f67985d754eda69b40d16bc7/diff" to get inode usage: stat /var/lib/containers/storage/overlay/77ce0e0be23474712532b9234efc385243afef81f67985d754eda69b40d16bc7/diff: no such file or directory, extraDiskErr: Jan 31 15:17:09 crc kubenswrapper[4763]: E0131 15:17:09.945544 4763 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6ace0ac_a7c8_4413_90ee_53d6bf699eef.slice/crio-bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6ace0ac_a7c8_4413_90ee_53d6bf699eef.slice/crio-conmon-bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a5bfb32_7eae_4b04_9aee_d0873f0c93b9.slice/crio-36471f31ca52c785e38c6b2b66ccd5857f6ef478d9ab8974c38189fbf0e27a7c.scope\": RecentStats: unable to find data in memory cache]" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.172985 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.184626 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.188707 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.244794 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") pod \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.244876 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") pod \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.244930 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-cache\") pod \"8d1baecf-2b7e-418b-8c64-95b6551f365e\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.244985 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r45b\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-kube-api-access-8r45b\") pod \"8d1baecf-2b7e-418b-8c64-95b6551f365e\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245028 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-lock\") pod \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245097 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-cache\") pod \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245148 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9hz8\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-kube-api-access-n9hz8\") pod \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245218 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"8d1baecf-2b7e-418b-8c64-95b6551f365e\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245262 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") pod \"8d1baecf-2b7e-418b-8c64-95b6551f365e\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245320 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-cache\") pod \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245349 
4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245386 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-lock\") pod \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245421 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-lock\") pod \"8d1baecf-2b7e-418b-8c64-95b6551f365e\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245471 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp55l\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-kube-api-access-sp55l\") pod \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245498 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.248414 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-lock" (OuterVolumeSpecName: "lock") pod "8d1baecf-2b7e-418b-8c64-95b6551f365e" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.249213 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-lock" (OuterVolumeSpecName: "lock") pod "e6ace0ac-a7c8-4413-90ee-53d6bf699eef" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.249295 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-cache" (OuterVolumeSpecName: "cache") pod "e6ace0ac-a7c8-4413-90ee-53d6bf699eef" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.250124 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-lock" (OuterVolumeSpecName: "lock") pod "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9"). InnerVolumeSpecName "lock". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.252084 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-cache" (OuterVolumeSpecName: "cache") pod "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.253432 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8d1baecf-2b7e-418b-8c64-95b6551f365e" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.253582 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "swift") pod "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.253798 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "swift") pod "e6ace0ac-a7c8-4413-90ee-53d6bf699eef" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.253863 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-kube-api-access-8r45b" (OuterVolumeSpecName: "kube-api-access-8r45b") pod "8d1baecf-2b7e-418b-8c64-95b6551f365e" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e"). InnerVolumeSpecName "kube-api-access-8r45b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.254087 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-kube-api-access-n9hz8" (OuterVolumeSpecName: "kube-api-access-n9hz8") pod "e6ace0ac-a7c8-4413-90ee-53d6bf699eef" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef"). InnerVolumeSpecName "kube-api-access-n9hz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.254273 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e6ace0ac-a7c8-4413-90ee-53d6bf699eef" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.254407 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-kube-api-access-sp55l" (OuterVolumeSpecName: "kube-api-access-sp55l") pod "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9"). InnerVolumeSpecName "kube-api-access-sp55l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.254512 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.254558 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-cache" (OuterVolumeSpecName: "cache") pod "8d1baecf-2b7e-418b-8c64-95b6551f365e" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.256314 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "swift") pod "8d1baecf-2b7e-418b-8c64-95b6551f365e" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347127 4763 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-cache\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347205 4763 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347222 4763 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347233 4763 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347247 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp55l\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-kube-api-access-sp55l\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347268 4763 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347280 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347292 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347302 4763 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-cache\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347313 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r45b\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-kube-api-access-8r45b\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347324 4763 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347333 4763 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-cache\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347343 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9hz8\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-kube-api-access-n9hz8\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347360 4763 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347371 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.360623 4763 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.366560 4763 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.370314 4763 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.448361 4763 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.448394 4763 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.448406 4763 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.640110 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a" exitCode=137 Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.640209 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.640349 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.640479 4763 scope.go:117] "RemoveContainer" containerID="c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.640464 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"1d4cf1913d51894066c90a63ebfe91dd9186021fe6b288d04eb4138560d222cd"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653434 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e" exitCode=137 Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653546 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653607 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"54e9716be29a1a3e98b1c62af30cb48d8d18ed8bc33c1e831e29d90d1bbee6be"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653626 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92ad90a8f7a31ed2cf85f0757c71ef34da8f0c359347e86a4bcef54ce85516a6"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653643 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fbf72198234fda54505fa079d75aa79a7acaa34fd24b31ddd293f0aae93e0c93"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653651 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9a8de8d27e0778796c6a4baff03a1d1d922e9ae78f97728f4ae4aaf7fa341563"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653659 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86c85c15b961f6795b6b5dc674ffc310871f76d3dbb1eb7225326dc937e30e64"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653667 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1351528e398244f79dc02492c56eb7c0b5cf97f5af4ab495fdf34673f2be4e3a"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653677 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6bbf9c9a63d33d2c6b6e0178b1ade00637993606e119582492e66b6a24013ab8"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653686 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"710120310f88e5c722eacaf49bc091b604c9117b11eb65ce63fd698be02b6699"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653711 4763 pod_container_deletor.go:114] "Failed to issue the 
request to remove container" containerID={"Type":"cri-o","ID":"99ceb2ba2010e4648343e1238ccdc00c944820acf210df397fc1aa1ebd073a62"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653719 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c8b0727815b22780ef52e9340937734001c60d592a0efa293171a6a99f631b8"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.654566 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669685 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="36471f31ca52c785e38c6b2b66ccd5857f6ef478d9ab8974c38189fbf0e27a7c" exitCode=137 Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669814 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"36471f31ca52c785e38c6b2b66ccd5857f6ef478d9ab8974c38189fbf0e27a7c"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669843 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36471f31ca52c785e38c6b2b66ccd5857f6ef478d9ab8974c38189fbf0e27a7c"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669858 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a193fbbc6d4c1b876140e857534a3e3476054468d680fb5032ef95fc43449eec"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669865 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa4453d984b8623efdeb6c14caeec9684c9789a6bbf3f070fda2ae53d211bc67"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669870 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9716359f4da3a71b9390a156a58d83e5742038cd7c70d2aaa6dddd57c4c7402f"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669877 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4007498dac8fd3b8123d95c0037cc87ebfb6676c6920014087622d824443355a"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669884 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"811bfef1f45e1c8bfb983ccdb9950299ee77652d3e6b07f11a1f38dcaa006989"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669890 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"433775c4538892b2b06557027ca728e6b8b86916941810cde9bf0aaa7cec78dd"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669896 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"18e785ee82660570c7a3c1f8a825fe58e5562959092c137ca7e8ad12f67b2cdf"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669902 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5de93fda3e4eeec09c3e0cab0c53f8bd9bc5a576f08e2f6a3e358bb501d0aee7"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669908 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"46bb08319fc51c2ac2e298b3d88809c97e2206ce13b2762bb97ee19fa37761d9"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669916 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d21b593c360d355cf6f976e7a33dd5c9a7af1da589078440cac056c5b3195552"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669922 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"539252f99bc3e6264ace92eb1a171fa552e64dc1e5ab28a67e1806e3665008b7"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669928 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8743be8babd948dd0c5cadcbb327888a64b81d91784dcd88e4edc80760703747"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669933 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6436423e21963e2b944d7661270d579b854013f8d01c88fd036a9b4a15a61846"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669939 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a92b2bd4fc8d6dca2fcd21333d8ea9dc1bea550c48e4c4023603133141572ad"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669955 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"17d6e2ea7e01a1661ce12761f2f04f8c25d3d7cfb02f9b9ef0ef2888b0bc4824"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669968 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36471f31ca52c785e38c6b2b66ccd5857f6ef478d9ab8974c38189fbf0e27a7c"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669975 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a193fbbc6d4c1b876140e857534a3e3476054468d680fb5032ef95fc43449eec"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669981 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa4453d984b8623efdeb6c14caeec9684c9789a6bbf3f070fda2ae53d211bc67"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669988 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9716359f4da3a71b9390a156a58d83e5742038cd7c70d2aaa6dddd57c4c7402f"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669995 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4007498dac8fd3b8123d95c0037cc87ebfb6676c6920014087622d824443355a"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670001 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"811bfef1f45e1c8bfb983ccdb9950299ee77652d3e6b07f11a1f38dcaa006989"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670007 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"433775c4538892b2b06557027ca728e6b8b86916941810cde9bf0aaa7cec78dd"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670014 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"18e785ee82660570c7a3c1f8a825fe58e5562959092c137ca7e8ad12f67b2cdf"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670020 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5de93fda3e4eeec09c3e0cab0c53f8bd9bc5a576f08e2f6a3e358bb501d0aee7"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670027 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46bb08319fc51c2ac2e298b3d88809c97e2206ce13b2762bb97ee19fa37761d9"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670033 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d21b593c360d355cf6f976e7a33dd5c9a7af1da589078440cac056c5b3195552"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670040 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"539252f99bc3e6264ace92eb1a171fa552e64dc1e5ab28a67e1806e3665008b7"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670046 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8743be8babd948dd0c5cadcbb327888a64b81d91784dcd88e4edc80760703747"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670053 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6436423e21963e2b944d7661270d579b854013f8d01c88fd036a9b4a15a61846"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670061 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a92b2bd4fc8d6dca2fcd21333d8ea9dc1bea550c48e4c4023603133141572ad"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670185 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.690173 4763 scope.go:117] "RemoveContainer" containerID="a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.696924 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.705858 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.717962 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.724089 4763 scope.go:117] "RemoveContainer" containerID="1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.725538 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.740665 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.745900 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.749738 4763 scope.go:117] "RemoveContainer" containerID="3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.781517 4763 scope.go:117] "RemoveContainer" containerID="73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.809045 4763 scope.go:117] "RemoveContainer" containerID="4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.830539 4763 scope.go:117] "RemoveContainer" containerID="1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.850196 4763 scope.go:117] "RemoveContainer" containerID="afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.874060 4763 scope.go:117] "RemoveContainer" containerID="f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.895654 4763 scope.go:117] "RemoveContainer" containerID="483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.921677 4763 scope.go:117] "RemoveContainer" containerID="400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.938569 4763 scope.go:117] "RemoveContainer" containerID="cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.958082 4763 scope.go:117] "RemoveContainer" containerID="3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.972110 4763 scope.go:117] "RemoveContainer" containerID="619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.999267 4763 scope.go:117] "RemoveContainer" containerID="9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb" Jan 31 15:17:11 crc 
kubenswrapper[4763]: I0131 15:17:11.018875 4763 scope.go:117] "RemoveContainer" containerID="c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.019312 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a\": container with ID starting with c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a not found: ID does not exist" containerID="c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.019354 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a"} err="failed to get container status \"c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a\": rpc error: code = NotFound desc = could not find container \"c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a\": container with ID starting with c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.019380 4763 scope.go:117] "RemoveContainer" containerID="a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.019773 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24\": container with ID starting with a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24 not found: ID does not exist" containerID="a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.019803 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24"} err="failed to get container status \"a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24\": rpc error: code = NotFound desc = could not find container \"a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24\": container with ID starting with a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24 not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.019821 4763 scope.go:117] "RemoveContainer" containerID="1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.020087 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594\": container with ID starting with 1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594 not found: ID does not exist" containerID="1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.020128 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594"} err="failed to get container status \"1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594\": rpc error: code = NotFound desc = could not find container 
\"1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594\": container with ID starting with 1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594 not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.020157 4763 scope.go:117] "RemoveContainer" containerID="3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.020441 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef\": container with ID starting with 3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef not found: ID does not exist" containerID="3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.020466 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef"} err="failed to get container status \"3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef\": rpc error: code = NotFound desc = could not find container \"3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef\": container with ID starting with 3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.020486 4763 scope.go:117] "RemoveContainer" containerID="73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.020855 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4\": container with ID starting with 73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4 not found: ID does not exist" containerID="73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.020928 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4"} err="failed to get container status \"73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4\": rpc error: code = NotFound desc = could not find container \"73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4\": container with ID starting with 73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4 not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.020970 4763 scope.go:117] "RemoveContainer" containerID="4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.021307 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a\": container with ID starting with 4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a not found: ID does not exist" containerID="4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.021349 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a"} 
err="failed to get container status \"4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a\": rpc error: code = NotFound desc = could not find container \"4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a\": container with ID starting with 4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.021375 4763 scope.go:117] "RemoveContainer" containerID="1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.021627 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406\": container with ID starting with 1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406 not found: ID does not exist" containerID="1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.021660 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406"} err="failed to get container status \"1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406\": rpc error: code = NotFound desc = could not find container \"1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406\": container with ID starting with 1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406 not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.021686 4763 scope.go:117] "RemoveContainer" containerID="afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.021973 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a\": container with ID starting with afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a not found: ID does not exist" containerID="afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.021998 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a"} err="failed to get container status \"afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a\": rpc error: code = NotFound desc = could not find container \"afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a\": container with ID starting with afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.022018 4763 scope.go:117] "RemoveContainer" containerID="f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.022287 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe\": container with ID starting with f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe not found: ID does not exist" containerID="f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.022320 4763 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe"} err="failed to get container status \"f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe\": rpc error: code = NotFound desc = could not find container \"f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe\": container with ID starting with f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.022343 4763 scope.go:117] "RemoveContainer" containerID="483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.022605 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2\": container with ID starting with 483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2 not found: ID does not exist" containerID="483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.022629 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2"} err="failed to get container status \"483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2\": rpc error: code = NotFound desc = could not find container \"483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2\": container with ID starting with 483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2 not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.022650 4763 scope.go:117] "RemoveContainer" containerID="400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.022939 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971\": container with ID starting with 400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971 not found: ID does not exist" containerID="400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.022969 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971"} err="failed to get container status \"400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971\": rpc error: code = NotFound desc = could not find container \"400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971\": container with ID starting with 400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971 not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.022988 4763 scope.go:117] "RemoveContainer" containerID="cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.023250 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634\": container with ID starting with cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634 not found: ID does 
not exist" containerID="cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.023272 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634"} err="failed to get container status \"cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634\": rpc error: code = NotFound desc = could not find container \"cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634\": container with ID starting with cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634 not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.023288 4763 scope.go:117] "RemoveContainer" containerID="3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.023476 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d\": container with ID starting with 3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d not found: ID does not exist" containerID="3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.023504 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d"} err="failed to get container status \"3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d\": rpc error: code = NotFound desc = could not find container \"3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d\": container with ID starting with 3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.023523 4763 scope.go:117] "RemoveContainer" containerID="619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.023911 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d\": container with ID starting with 619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d not found: ID does not exist" containerID="619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.023930 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d"} err="failed to get container status \"619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d\": rpc error: code = NotFound desc = could not find container \"619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d\": container with ID starting with 619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.023943 4763 scope.go:117] "RemoveContainer" containerID="9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.024247 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb\": container with ID starting with 9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb not found: ID does not exist" containerID="9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.024274 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb"} err="failed to get container status \"9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb\": rpc error: code = NotFound desc = could not find container \"9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb\": container with ID starting with 9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.024291 4763 scope.go:117] "RemoveContainer" containerID="bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.044772 4763 scope.go:117] "RemoveContainer" containerID="ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.064387 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" path="/var/lib/kubelet/pods/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9/volumes" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.067137 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" path="/var/lib/kubelet/pods/8d1baecf-2b7e-418b-8c64-95b6551f365e/volumes" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.068839 4763 scope.go:117] "RemoveContainer" containerID="2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.069118 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" path="/var/lib/kubelet/pods/e6ace0ac-a7c8-4413-90ee-53d6bf699eef/volumes" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.085957 4763 scope.go:117] "RemoveContainer" containerID="4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.102296 4763 scope.go:117] "RemoveContainer" containerID="98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.120728 4763 scope.go:117] "RemoveContainer" containerID="be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.137955 4763 scope.go:117] "RemoveContainer" containerID="92ad90a8f7a31ed2cf85f0757c71ef34da8f0c359347e86a4bcef54ce85516a6" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.156878 4763 scope.go:117] "RemoveContainer" containerID="fbf72198234fda54505fa079d75aa79a7acaa34fd24b31ddd293f0aae93e0c93" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.175787 4763 scope.go:117] "RemoveContainer" containerID="9a8de8d27e0778796c6a4baff03a1d1d922e9ae78f97728f4ae4aaf7fa341563" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.193305 4763 scope.go:117] "RemoveContainer" containerID="86c85c15b961f6795b6b5dc674ffc310871f76d3dbb1eb7225326dc937e30e64" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.218645 4763 scope.go:117] "RemoveContainer" 
containerID="1351528e398244f79dc02492c56eb7c0b5cf97f5af4ab495fdf34673f2be4e3a" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.236912 4763 scope.go:117] "RemoveContainer" containerID="6bbf9c9a63d33d2c6b6e0178b1ade00637993606e119582492e66b6a24013ab8" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.255424 4763 scope.go:117] "RemoveContainer" containerID="710120310f88e5c722eacaf49bc091b604c9117b11eb65ce63fd698be02b6699" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.278299 4763 scope.go:117] "RemoveContainer" containerID="99ceb2ba2010e4648343e1238ccdc00c944820acf210df397fc1aa1ebd073a62" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.297568 4763 scope.go:117] "RemoveContainer" containerID="0c8b0727815b22780ef52e9340937734001c60d592a0efa293171a6a99f631b8" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.327949 4763 scope.go:117] "RemoveContainer" containerID="bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.328673 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e\": container with ID starting with bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e not found: ID does not exist" containerID="bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.328728 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e"} err="failed to get container status \"bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e\": rpc error: code = NotFound desc = could not find container \"bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e\": container with ID starting with bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.328748 4763 scope.go:117] "RemoveContainer" containerID="ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.329092 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6\": container with ID starting with ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6 not found: ID does not exist" containerID="ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.329117 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6"} err="failed to get container status \"ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6\": rpc error: code = NotFound desc = could not find container \"ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6\": container with ID starting with ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6 not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.329171 4763 scope.go:117] "RemoveContainer" containerID="2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.329468 4763 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b\": container with ID starting with 2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b not found: ID does not exist" containerID="2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.329491 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b"} err="failed to get container status \"2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b\": rpc error: code = NotFound desc = could not find container \"2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b\": container with ID starting with 2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.329505 4763 scope.go:117] "RemoveContainer" containerID="4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.329811 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b\": container with ID starting with 4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b not found: ID does not exist" containerID="4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.329838 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b"} err="failed to get container status \"4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b\": rpc error: code = NotFound desc = could not find container \"4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b\": container with ID starting with 4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.329873 4763 scope.go:117] "RemoveContainer" containerID="98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.330109 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d\": container with ID starting with 98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d not found: ID does not exist" containerID="98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.330174 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d"} err="failed to get container status \"98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d\": rpc error: code = NotFound desc = could not find container \"98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d\": container with ID starting with 98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.330219 4763 scope.go:117] "RemoveContainer" 
containerID="be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.330491 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832\": container with ID starting with be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832 not found: ID does not exist" containerID="be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.330516 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832"} err="failed to get container status \"be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832\": rpc error: code = NotFound desc = could not find container \"be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832\": container with ID starting with be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832 not found: ID does not exist" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.590444 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591163 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-expirer" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591174 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-expirer" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591186 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591192 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591198 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591204 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591214 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerName="proxy-httpd" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591221 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerName="proxy-httpd" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591229 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dec991f-9426-402a-8f83-8547257d2b30" containerName="swift-ring-rebalance" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591235 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dec991f-9426-402a-8f83-8547257d2b30" containerName="swift-ring-rebalance" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591245 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-server" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591251 
4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591258 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591263 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591274 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591279 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591289 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591295 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591302 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerName="proxy-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591307 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerName="proxy-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591316 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591321 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591330 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591337 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591343 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591348 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591357 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591363 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-replicator"
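The `SyncLoop ADD` for the replacement swift-storage-0 pod triggers this sweep: for every container of the deleted pod UIDs, the CPU manager's `RemoveStaleState` drops the stale entry and `state_mem` confirms it with a matching `Deleted CPUSet assignment` line, one E/I pair per container. Counting either side of the pair shows how much state was purged per pod. A short tally sketch (the `kubelet.log` path and summary format are mine):

```python
import re
from collections import Counter

# One "Deleted CPUSet assignment" entry is logged per purged container,
# paired with the cpu_manager "RemoveStaleState" entry just before it.
DELETED = re.compile(
    r'"Deleted CPUSet assignment" podUID="([0-9a-f-]+)" containerName="([A-Za-z0-9-]+)"'
)

def purged_per_pod(lines):
    """Count purged CPUSet assignments per pod UID."""
    counts = Counter()
    for line in lines:
        for pod_uid, _container in DELETED.findall(line):
            counts[pod_uid] += 1
    return counts

with open("kubelet.log") as log:  # assumed path
    for pod_uid, total in purged_per_pod(log).most_common():
        print(f"{pod_uid}: {total} stale CPUSet assignments deleted")
```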
kubenswrapper[4763]: I0131 15:17:13.591375 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-updater" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591385 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="rsync" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591390 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="rsync" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591400 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591406 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591415 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591422 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591431 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="rsync" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591436 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="rsync" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591444 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-server" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591450 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-server" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591458 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-updater" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591463 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-updater" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591469 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591475 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591484 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591489 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591495 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-auditor" Jan 31 15:17:13 crc 
kubenswrapper[4763]: I0131 15:17:13.591500 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591507 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-server" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591512 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-server" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591518 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591523 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591531 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-reaper" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591537 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-reaper" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591545 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-expirer" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591550 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-expirer" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591559 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-expirer" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591564 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-expirer" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591572 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-updater" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591580 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-updater" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591589 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="swift-recon-cron" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591595 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="swift-recon-cron" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591620 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-server" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591626 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-server" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591635 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-updater" Jan 31 
15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591640 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-updater" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591646 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-server" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591651 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-server" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591660 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591665 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591672 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-server" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591677 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-server" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591685 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="rsync" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591703 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="rsync" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591714 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591720 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591727 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-updater" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591733 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-updater" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591741 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591746 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591755 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="swift-recon-cron" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591760 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="swift-recon-cron" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591767 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" 
containerName="object-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591772 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591781 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="swift-recon-cron" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591786 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="swift-recon-cron" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591798 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-reaper" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591811 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-reaper" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591820 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-reaper" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591825 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-reaper" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591834 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591840 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591848 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-server" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591854 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-server" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591861 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591867 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591977 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-server" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591986 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerName="proxy-httpd" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591993 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592000 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dec991f-9426-402a-8f83-8547257d2b30" containerName="swift-ring-rebalance" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592007 4763 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-updater" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592015 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-updater" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592022 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-server" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592032 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-reaper" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592039 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592046 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592055 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="swift-recon-cron" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592062 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerName="proxy-server" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592068 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="rsync" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592077 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-server" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592084 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="rsync" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592091 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592096 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-reaper" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592104 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-server" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592111 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="swift-recon-cron" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592118 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-server" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592126 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592133 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: 
I0131 15:17:13.592141 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-updater" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592148 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-updater" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592155 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592160 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592167 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592176 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-server" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592185 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-server" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592193 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-server" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592200 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592207 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592215 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-expirer" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592223 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592232 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-updater" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592239 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592245 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592252 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-updater" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592259 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-expirer" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592266 4763 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592272 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-server" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592279 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="rsync" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592287 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-reaper" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592294 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592300 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-expirer" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592308 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="swift-recon-cron" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592316 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592324 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.595652 4763 util.go:30] "No sandbox for pod can be found. 
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.598874 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.598921 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.598951 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-ngwjv"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.599243 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.630014 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.722649 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.722710 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-cache\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.722736 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-lock\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.722765 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.722789 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvzrw\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-kube-api-access-bvzrw\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.824370 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.824428 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-cache\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.824451 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-lock\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.824491 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.824516 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvzrw\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-kube-api-access-bvzrw\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.824732 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.824788 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.824742 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") device mount path \"/mnt/openstack/pv05\"" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.824857 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:17:14.324836164 +0000 UTC m=+1354.079574517 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.824999 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-lock\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.825261 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-cache\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.846541 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.862211 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvzrw\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-kube-api-access-bvzrw\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.876922 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9d572"]
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.877820 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.880800 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.881079 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.881309 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.889890 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9d572"]
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.913183 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9d572"]
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.930546 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[dispersionconf etc-swift kube-api-access-rbs9q ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="swift-kuttl-tests/swift-ring-rebalance-9d572" podUID="0e216b50-34a4-4079-a8eb-2bd926eda934"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.026331 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e216b50-34a4-4079-a8eb-2bd926eda934-etc-swift\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.026379 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-dispersionconf\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.026434 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-swiftconf\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.026457 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-scripts\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.026476 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-ring-data-devices\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.026748 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbs9q\" (UniqueName: \"kubernetes.io/projected/0e216b50-34a4-4079-a8eb-2bd926eda934-kube-api-access-rbs9q\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.128364 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-swiftconf\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.128417 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-scripts\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.128439 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-ring-data-devices\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.128516 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbs9q\" (UniqueName: \"kubernetes.io/projected/0e216b50-34a4-4079-a8eb-2bd926eda934-kube-api-access-rbs9q\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.128550 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e216b50-34a4-4079-a8eb-2bd926eda934-etc-swift\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.128569 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-dispersionconf\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.129159 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e216b50-34a4-4079-a8eb-2bd926eda934-etc-swift\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.129533 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-ring-data-devices\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.129975 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-scripts\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.133212 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-swiftconf\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.139209 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-dispersionconf\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.151913 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbs9q\" (UniqueName: \"kubernetes.io/projected/0e216b50-34a4-4079-a8eb-2bd926eda934-kube-api-access-rbs9q\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.332347 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:14 crc kubenswrapper[4763]: E0131 15:17:14.332992 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:17:14 crc kubenswrapper[4763]: E0131 15:17:14.333034 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:17:14 crc kubenswrapper[4763]: E0131 15:17:14.333200 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:17:15.33313546 +0000 UTC m=+1355.087873793 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.487226 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-77c98d654c-29n65"]
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.489142 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.500192 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-77c98d654c-29n65"]
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.636565 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fssq6\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-kube-api-access-fssq6\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.636709 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.636785 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-run-httpd\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.636834 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-config-data\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.636898 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-log-httpd\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.726979 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.734134 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.737842 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-log-httpd\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.737940 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fssq6\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-kube-api-access-fssq6\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.737995 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.738032 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-run-httpd\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.738069 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-config-data\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: E0131 15:17:14.738186 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:17:14 crc kubenswrapper[4763]: E0131 15:17:14.738217 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found
Jan 31 15:17:14 crc kubenswrapper[4763]: E0131 15:17:14.738284 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:17:15.238261303 +0000 UTC m=+1354.992999616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.738421 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-log-httpd\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.738583 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-run-httpd\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.744889 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-config-data\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.754121 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fssq6\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-kube-api-access-fssq6\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.838890 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-dispersionconf\") pod \"0e216b50-34a4-4079-a8eb-2bd926eda934\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") "
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.839005 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e216b50-34a4-4079-a8eb-2bd926eda934-etc-swift\") pod \"0e216b50-34a4-4079-a8eb-2bd926eda934\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") "
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.839036 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-scripts\") pod \"0e216b50-34a4-4079-a8eb-2bd926eda934\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") "
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.839106 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbs9q\" (UniqueName: \"kubernetes.io/projected/0e216b50-34a4-4079-a8eb-2bd926eda934-kube-api-access-rbs9q\") pod \"0e216b50-34a4-4079-a8eb-2bd926eda934\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") "
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.839149 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-ring-data-devices\") pod \"0e216b50-34a4-4079-a8eb-2bd926eda934\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") "
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.839193 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-swiftconf\") pod \"0e216b50-34a4-4079-a8eb-2bd926eda934\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") "
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.839296 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e216b50-34a4-4079-a8eb-2bd926eda934-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0e216b50-34a4-4079-a8eb-2bd926eda934" (UID: "0e216b50-34a4-4079-a8eb-2bd926eda934"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.839600 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "0e216b50-34a4-4079-a8eb-2bd926eda934" (UID: "0e216b50-34a4-4079-a8eb-2bd926eda934"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.840079 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-scripts" (OuterVolumeSpecName: "scripts") pod "0e216b50-34a4-4079-a8eb-2bd926eda934" (UID: "0e216b50-34a4-4079-a8eb-2bd926eda934"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.840570 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e216b50-34a4-4079-a8eb-2bd926eda934-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.840609 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.840631 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.842425 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "0e216b50-34a4-4079-a8eb-2bd926eda934" (UID: "0e216b50-34a4-4079-a8eb-2bd926eda934"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.843219 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "0e216b50-34a4-4079-a8eb-2bd926eda934" (UID: "0e216b50-34a4-4079-a8eb-2bd926eda934"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.846866 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e216b50-34a4-4079-a8eb-2bd926eda934-kube-api-access-rbs9q" (OuterVolumeSpecName: "kube-api-access-rbs9q") pod "0e216b50-34a4-4079-a8eb-2bd926eda934" (UID: "0e216b50-34a4-4079-a8eb-2bd926eda934"). InnerVolumeSpecName "kube-api-access-rbs9q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.942237 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.942284 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.942303 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbs9q\" (UniqueName: \"kubernetes.io/projected/0e216b50-34a4-4079-a8eb-2bd926eda934-kube-api-access-rbs9q\") on node \"crc\" DevicePath \"\""
Jan 31 15:17:15 crc kubenswrapper[4763]: I0131 15:17:15.248609 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:15 crc kubenswrapper[4763]: E0131 15:17:15.248832 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:17:15 crc kubenswrapper[4763]: E0131 15:17:15.248846 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found
Jan 31 15:17:15 crc kubenswrapper[4763]: E0131 15:17:15.248914 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:17:16.24887358 +0000 UTC m=+1356.003611873 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found
Jan 31 15:17:15 crc kubenswrapper[4763]: I0131 15:17:15.349772 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:15 crc kubenswrapper[4763]: E0131 15:17:15.349992 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:17:15 crc kubenswrapper[4763]: E0131 15:17:15.350020 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:17:15 crc kubenswrapper[4763]: E0131 15:17:15.350089 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:17:17.350068232 +0000 UTC m=+1357.104806525 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found
Jan 31 15:17:15 crc kubenswrapper[4763]: I0131 15:17:15.734726 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:15 crc kubenswrapper[4763]: I0131 15:17:15.770091 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9d572"]
Jan 31 15:17:15 crc kubenswrapper[4763]: I0131 15:17:15.777231 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9d572"]
Jan 31 15:17:16 crc kubenswrapper[4763]: I0131 15:17:16.263499 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:16 crc kubenswrapper[4763]: E0131 15:17:16.263669 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:17:16 crc kubenswrapper[4763]: E0131 15:17:16.263685 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found
Jan 31 15:17:16 crc kubenswrapper[4763]: E0131 15:17:16.263761 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:17:18.263739297 +0000 UTC m=+1358.018477600 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found
Jan 31 15:17:17 crc kubenswrapper[4763]: I0131 15:17:17.050856 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e216b50-34a4-4079-a8eb-2bd926eda934" path="/var/lib/kubelet/pods/0e216b50-34a4-4079-a8eb-2bd926eda934/volumes"
Jan 31 15:17:17 crc kubenswrapper[4763]: I0131 15:17:17.380522 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:17 crc kubenswrapper[4763]: E0131 15:17:17.380849 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:17:17 crc kubenswrapper[4763]: E0131 15:17:17.380881 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:17:17 crc kubenswrapper[4763]: E0131 15:17:17.380957 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:17:21.380929024 +0000 UTC m=+1361.135667367 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found
Jan 31 15:17:18 crc kubenswrapper[4763]: I0131 15:17:18.296302 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:18 crc kubenswrapper[4763]: E0131 15:17:18.296643 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:17:18 crc kubenswrapper[4763]: E0131 15:17:18.296990 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found
Jan 31 15:17:18 crc kubenswrapper[4763]: E0131 15:17:18.297104 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:17:22.297065075 +0000 UTC m=+1362.051803418 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found Jan 31 15:17:21 crc kubenswrapper[4763]: I0131 15:17:21.454617 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:17:21 crc kubenswrapper[4763]: E0131 15:17:21.454877 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:17:21 crc kubenswrapper[4763]: E0131 15:17:21.455151 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:17:21 crc kubenswrapper[4763]: E0131 15:17:21.455210 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:17:29.455191934 +0000 UTC m=+1369.209930227 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found Jan 31 15:17:22 crc kubenswrapper[4763]: I0131 15:17:22.368445 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:17:22 crc kubenswrapper[4763]: E0131 15:17:22.368883 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:17:22 crc kubenswrapper[4763]: E0131 15:17:22.368920 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found Jan 31 15:17:22 crc kubenswrapper[4763]: E0131 15:17:22.368997 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:17:30.368974523 +0000 UTC m=+1370.123712856 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found Jan 31 15:17:29 crc kubenswrapper[4763]: I0131 15:17:29.481109 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:17:29 crc kubenswrapper[4763]: E0131 15:17:29.481419 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:17:29 crc kubenswrapper[4763]: E0131 15:17:29.481754 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:17:29 crc kubenswrapper[4763]: E0131 15:17:29.481823 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:17:45.48180527 +0000 UTC m=+1385.236543563 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found Jan 31 15:17:30 crc kubenswrapper[4763]: I0131 15:17:30.392261 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:17:30 crc kubenswrapper[4763]: E0131 15:17:30.392591 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:17:30 crc kubenswrapper[4763]: E0131 15:17:30.392642 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found Jan 31 15:17:30 crc kubenswrapper[4763]: E0131 15:17:30.392790 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:17:46.392749963 +0000 UTC m=+1386.147488316 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found Jan 31 15:17:45 crc kubenswrapper[4763]: I0131 15:17:45.486912 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:17:45 crc kubenswrapper[4763]: E0131 15:17:45.487187 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:17:45 crc kubenswrapper[4763]: E0131 15:17:45.487604 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:17:45 crc kubenswrapper[4763]: E0131 15:17:45.487673 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:18:17.487649084 +0000 UTC m=+1417.242387387 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found Jan 31 15:17:46 crc kubenswrapper[4763]: I0131 15:17:46.399307 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:17:46 crc kubenswrapper[4763]: E0131 15:17:46.399589 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:17:46 crc kubenswrapper[4763]: E0131 15:17:46.399902 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found Jan 31 15:17:46 crc kubenswrapper[4763]: E0131 15:17:46.400007 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:18:18.399978105 +0000 UTC m=+1418.154716438 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.069253 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fdvt6"] Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.071421 4763 util.go:30] "No sandbox for pod can be found. 
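
The retry delays in the mount failures above double on every attempt: 1s, 2s, 4s, 8s, 16s, 32s, then 1m4s and a 2m2s ceiling later in this log. A minimal Go sketch of that doubling-with-cap policy, assuming the 1s initial delay and the 2m2s cap read off these durationBeforeRetry values (the real logic sits behind nestedpendingoperations.go; the function below is illustrative, not kubelet source):

package main

import (
	"fmt"
	"time"
)

// nextRetryDelay doubles the previous delay and clamps it to max,
// matching the 1s, 2s, 4s, ..., 1m4s, 2m2s progression in this log.
// The initial and max values are inferred from the log, not quoted code.
func nextRetryDelay(prev, initial, max time.Duration) time.Duration {
	if prev == 0 {
		return initial
	}
	next := prev * 2
	if next > max {
		return max
	}
	return next
}

func main() {
	const (
		initialDelay = 1 * time.Second
		maxDelay     = 2*time.Minute + 2*time.Second // 2m2s, largest delay observed
	)
	var d time.Duration
	for i := 0; i < 9; i++ {
		d = nextRetryDelay(d, initialDelay, maxDelay)
		fmt.Println(d) // 1s 2s 4s 8s 16s 32s 1m4s 2m2s 2m2s
	}
}
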
Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.069253 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fdvt6"]
Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.071421 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fdvt6"
Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.082415 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdvt6"]
Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.138275 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-utilities\") pod \"redhat-marketplace-fdvt6\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " pod="openshift-marketplace/redhat-marketplace-fdvt6"
Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.138461 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-catalog-content\") pod \"redhat-marketplace-fdvt6\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " pod="openshift-marketplace/redhat-marketplace-fdvt6"
Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.138483 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdrbb\" (UniqueName: \"kubernetes.io/projected/5bf14829-3173-4bb3-9696-9f721465d757-kube-api-access-hdrbb\") pod \"redhat-marketplace-fdvt6\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " pod="openshift-marketplace/redhat-marketplace-fdvt6"
Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.239552 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-catalog-content\") pod \"redhat-marketplace-fdvt6\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " pod="openshift-marketplace/redhat-marketplace-fdvt6"
Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.239607 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdrbb\" (UniqueName: \"kubernetes.io/projected/5bf14829-3173-4bb3-9696-9f721465d757-kube-api-access-hdrbb\") pod \"redhat-marketplace-fdvt6\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " pod="openshift-marketplace/redhat-marketplace-fdvt6"
Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.239672 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-utilities\") pod \"redhat-marketplace-fdvt6\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " pod="openshift-marketplace/redhat-marketplace-fdvt6"
Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.240165 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-catalog-content\") pod \"redhat-marketplace-fdvt6\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " pod="openshift-marketplace/redhat-marketplace-fdvt6"
Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.240288 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-utilities\") pod \"redhat-marketplace-fdvt6\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " pod="openshift-marketplace/redhat-marketplace-fdvt6"
Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.261346 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdrbb\" (UniqueName: \"kubernetes.io/projected/5bf14829-3173-4bb3-9696-9f721465d757-kube-api-access-hdrbb\") pod \"redhat-marketplace-fdvt6\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " pod="openshift-marketplace/redhat-marketplace-fdvt6"
Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.393213 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fdvt6"
Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.679797 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdvt6"]
Jan 31 15:18:09 crc kubenswrapper[4763]: I0131 15:18:09.219667 4763 generic.go:334] "Generic (PLEG): container finished" podID="5bf14829-3173-4bb3-9696-9f721465d757" containerID="a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2" exitCode=0
Jan 31 15:18:09 crc kubenswrapper[4763]: I0131 15:18:09.219732 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdvt6" event={"ID":"5bf14829-3173-4bb3-9696-9f721465d757","Type":"ContainerDied","Data":"a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2"}
Jan 31 15:18:09 crc kubenswrapper[4763]: I0131 15:18:09.219761 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdvt6" event={"ID":"5bf14829-3173-4bb3-9696-9f721465d757","Type":"ContainerStarted","Data":"5cc5ef5e63ad76ca638a72e06d1076b5d187e70eab209b3913d347757ee3caf2"}
Jan 31 15:18:09 crc kubenswrapper[4763]: I0131 15:18:09.221795 4763 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 31 15:18:10 crc kubenswrapper[4763]: I0131 15:18:10.227461 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdvt6" event={"ID":"5bf14829-3173-4bb3-9696-9f721465d757","Type":"ContainerStarted","Data":"3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2"}
Jan 31 15:18:11 crc kubenswrapper[4763]: I0131 15:18:11.236210 4763 generic.go:334] "Generic (PLEG): container finished" podID="5bf14829-3173-4bb3-9696-9f721465d757" containerID="3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2" exitCode=0
Jan 31 15:18:11 crc kubenswrapper[4763]: I0131 15:18:11.236285 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdvt6" event={"ID":"5bf14829-3173-4bb3-9696-9f721465d757","Type":"ContainerDied","Data":"3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2"}
Jan 31 15:18:12 crc kubenswrapper[4763]: I0131 15:18:12.246235 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdvt6" event={"ID":"5bf14829-3173-4bb3-9696-9f721465d757","Type":"ContainerStarted","Data":"53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0"}
Jan 31 15:18:17 crc kubenswrapper[4763]: I0131 15:18:17.575437 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:18:17 crc kubenswrapper[4763]: E0131 15:18:17.575811 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:18:17 crc kubenswrapper[4763]: E0131 15:18:17.576183 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:18:17 crc kubenswrapper[4763]: E0131 15:18:17.576247 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:19:21.576227515 +0000 UTC m=+1481.330965808 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found
Jan 31 15:18:18 crc kubenswrapper[4763]: I0131 15:18:18.394264 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fdvt6"
Jan 31 15:18:18 crc kubenswrapper[4763]: I0131 15:18:18.394356 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fdvt6"
Jan 31 15:18:18 crc kubenswrapper[4763]: I0131 15:18:18.462881 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fdvt6"
Jan 31 15:18:18 crc kubenswrapper[4763]: I0131 15:18:18.489734 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:18:18 crc kubenswrapper[4763]: E0131 15:18:18.489835 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:18:18 crc kubenswrapper[4763]: E0131 15:18:18.489867 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found
Jan 31 15:18:18 crc kubenswrapper[4763]: E0131 15:18:18.489927 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:19:22.489908281 +0000 UTC m=+1482.244646574 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found
Jan 31 15:18:18 crc kubenswrapper[4763]: I0131 15:18:18.496453 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fdvt6" podStartSLOduration=8.067956899 podStartE2EDuration="10.496426505s" podCreationTimestamp="2026-01-31 15:18:08 +0000 UTC" firstStartedPulling="2026-01-31 15:18:09.221557405 +0000 UTC m=+1408.976295698" lastFinishedPulling="2026-01-31 15:18:11.650027011 +0000 UTC m=+1411.404765304" observedRunningTime="2026-01-31 15:18:12.270991853 +0000 UTC m=+1412.025730146" watchObservedRunningTime="2026-01-31 15:18:18.496426505 +0000 UTC m=+1418.251164838"
Jan 31 15:18:19 crc kubenswrapper[4763]: I0131 15:18:19.367416 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fdvt6"
Jan 31 15:18:19 crc kubenswrapper[4763]: I0131 15:18:19.423277 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdvt6"]
Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.324391 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fdvt6" podUID="5bf14829-3173-4bb3-9696-9f721465d757" containerName="registry-server" containerID="cri-o://53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0" gracePeriod=2
Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.792122 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fdvt6"
Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.851067 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdrbb\" (UniqueName: \"kubernetes.io/projected/5bf14829-3173-4bb3-9696-9f721465d757-kube-api-access-hdrbb\") pod \"5bf14829-3173-4bb3-9696-9f721465d757\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") "
Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.851271 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-catalog-content\") pod \"5bf14829-3173-4bb3-9696-9f721465d757\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") "
Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.851313 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-utilities\") pod \"5bf14829-3173-4bb3-9696-9f721465d757\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") "
Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.853127 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-utilities" (OuterVolumeSpecName: "utilities") pod "5bf14829-3173-4bb3-9696-9f721465d757" (UID: "5bf14829-3173-4bb3-9696-9f721465d757"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.857688 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bf14829-3173-4bb3-9696-9f721465d757-kube-api-access-hdrbb" (OuterVolumeSpecName: "kube-api-access-hdrbb") pod "5bf14829-3173-4bb3-9696-9f721465d757" (UID: "5bf14829-3173-4bb3-9696-9f721465d757"). InnerVolumeSpecName "kube-api-access-hdrbb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.874754 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bf14829-3173-4bb3-9696-9f721465d757" (UID: "5bf14829-3173-4bb3-9696-9f721465d757"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.954427 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.954465 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.954483 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdrbb\" (UniqueName: \"kubernetes.io/projected/5bf14829-3173-4bb3-9696-9f721465d757-kube-api-access-hdrbb\") on node \"crc\" DevicePath \"\""
Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.339897 4763 generic.go:334] "Generic (PLEG): container finished" podID="5bf14829-3173-4bb3-9696-9f721465d757" containerID="53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0" exitCode=0
Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.340025 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fdvt6"
Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.340348 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdvt6" event={"ID":"5bf14829-3173-4bb3-9696-9f721465d757","Type":"ContainerDied","Data":"53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0"}
Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.340424 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdvt6" event={"ID":"5bf14829-3173-4bb3-9696-9f721465d757","Type":"ContainerDied","Data":"5cc5ef5e63ad76ca638a72e06d1076b5d187e70eab209b3913d347757ee3caf2"}
Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.340466 4763 scope.go:117] "RemoveContainer" containerID="53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0"
Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.368172 4763 scope.go:117] "RemoveContainer" containerID="3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2"
Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.422440 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdvt6"]
Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.423851 4763 scope.go:117] "RemoveContainer" containerID="a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2"
Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.436148 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdvt6"]
Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.448795 4763 scope.go:117] "RemoveContainer" containerID="53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0"
Jan 31 15:18:22 crc kubenswrapper[4763]: E0131 15:18:22.449469 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0\": container with ID starting with 53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0 not found: ID does not exist" containerID="53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0"
Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.449520 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0"} err="failed to get container status \"53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0\": rpc error: code = NotFound desc = could not find container \"53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0\": container with ID starting with 53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0 not found: ID does not exist"
Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.449555 4763 scope.go:117] "RemoveContainer" containerID="3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2"
Jan 31 15:18:22 crc kubenswrapper[4763]: E0131 15:18:22.450055 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2\": container with ID starting with 3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2 not found: ID does not exist" containerID="3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2"
Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.450162 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2"} err="failed to get container status \"3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2\": rpc error: code = NotFound desc = could not find container \"3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2\": container with ID starting with 3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2 not found: ID does not exist"
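
The DeleteContainer errors around this point are benign: the container is already gone when the kubelet re-queries it, so the runtime answers with rpc error: code = NotFound and the kubelet logs the error and moves on. A short Go sketch of that cleanup pattern, treating a gRPC NotFound as "already removed" (the alreadyGone helper and the fabricated error below are illustrative, not kubelet code):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether err is a gRPC NotFound, which during
// container cleanup simply means there is nothing left to delete.
func alreadyGone(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	// Stand-in for the runtime's ContainerStatus response after removal;
	// the truncated ID mirrors the log above, it is not a real lookup.
	err := status.Error(codes.NotFound, `could not find container "53a71c2b..."`)
	if alreadyGone(err) {
		fmt.Println("container already removed; nothing to do")
	}
}
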
Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.450243 4763 scope.go:117] "RemoveContainer" containerID="a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2"
Jan 31 15:18:22 crc kubenswrapper[4763]: E0131 15:18:22.450678 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2\": container with ID starting with a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2 not found: ID does not exist" containerID="a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2"
Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.450756 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2"} err="failed to get container status \"a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2\": rpc error: code = NotFound desc = could not find container \"a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2\": container with ID starting with a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2 not found: ID does not exist"
Jan 31 15:18:23 crc kubenswrapper[4763]: I0131 15:18:23.054904 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bf14829-3173-4bb3-9696-9f721465d757" path="/var/lib/kubelet/pods/5bf14829-3173-4bb3-9696-9f721465d757/volumes"
Jan 31 15:18:44 crc kubenswrapper[4763]: I0131 15:18:44.177349 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 15:18:44 crc kubenswrapper[4763]: I0131 15:18:44.177824 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 15:19:14 crc kubenswrapper[4763]: I0131 15:19:14.177821 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 15:19:14 crc kubenswrapper[4763]: I0131 15:19:14.178598 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 15:19:16 crc kubenswrapper[4763]: E0131 15:19:16.610359 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-storage-0" podUID="7390eb43-2a86-4ad9-b504-6fca814daf1c"
Jan 31 15:19:16 crc kubenswrapper[4763]: I0131 15:19:16.788908 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:19:17 crc kubenswrapper[4763]: E0131 15:19:17.514917 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" podUID="c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc"
Jan 31 15:19:17 crc kubenswrapper[4763]: I0131 15:19:17.797112 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:19:21 crc kubenswrapper[4763]: I0131 15:19:21.597118 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:19:21 crc kubenswrapper[4763]: E0131 15:19:21.597315 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:19:21 crc kubenswrapper[4763]: E0131 15:19:21.597570 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:19:21 crc kubenswrapper[4763]: E0131 15:19:21.597630 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:21:23.597611615 +0000 UTC m=+1603.352349928 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found
Jan 31 15:19:22 crc kubenswrapper[4763]: I0131 15:19:22.506907 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:19:22 crc kubenswrapper[4763]: E0131 15:19:22.507098 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:19:22 crc kubenswrapper[4763]: E0131 15:19:22.507347 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found
Jan 31 15:19:22 crc kubenswrapper[4763]: E0131 15:19:22.507417 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:21:24.507393967 +0000 UTC m=+1604.262132280 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found
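
Every etc-swift mount failure in this section has the same root cause: the projected volume references a ConfigMap, swift-kuttl-tests/swift-ring-files, that does not exist, presumably because the swift-ring-rebalance job deleted earlier never published it. A hedged client-go sketch of the existence check, assuming it runs in-cluster; this is diagnostic code, not something the kubelet itself runs:

package main

import (
	"context"
	"fmt"
	"log"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes in-cluster credentials
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Namespace and name come straight from the error lines above.
	_, err = cs.CoreV1().ConfigMaps("swift-kuttl-tests").
		Get(context.TODO(), "swift-ring-files", metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		fmt.Println("swift-ring-files missing: etc-swift mounts will keep failing")
	case err != nil:
		log.Fatal(err)
	default:
		fmt.Println("swift-ring-files exists: mounts should succeed on the next retry")
	}
}
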
Jan 31 15:19:41 crc kubenswrapper[4763]: I0131 15:19:41.989495 4763 scope.go:117] "RemoveContainer" containerID="6f58266364b48d61b8c86fc9324d5893f7c7c65dee6b93a4c08f45a44a010cb1"
Jan 31 15:19:42 crc kubenswrapper[4763]: I0131 15:19:42.026131 4763 scope.go:117] "RemoveContainer" containerID="5c8011a9d18428d101c91127624cd31f0ce315158e322ff2f55edf51f5e08669"
Jan 31 15:19:44 crc kubenswrapper[4763]: I0131 15:19:44.179975 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 15:19:44 crc kubenswrapper[4763]: I0131 15:19:44.180282 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 15:19:44 crc kubenswrapper[4763]: I0131 15:19:44.180336 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x"
Jan 31 15:19:45 crc kubenswrapper[4763]: I0131 15:19:45.030645 4763 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 15:19:45 crc kubenswrapper[4763]: I0131 15:19:45.030750 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" gracePeriod=600
Jan 31 15:19:45 crc kubenswrapper[4763]: I0131 15:19:45.107752 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/root-account-create-update-jh8vr"]
Jan 31 15:19:45 crc kubenswrapper[4763]: I0131 15:19:45.134132 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/root-account-create-update-jh8vr"]
Jan 31 15:19:45 crc kubenswrapper[4763]: E0131 15:19:45.155805 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78"
Jan 31 15:19:46 crc kubenswrapper[4763]: I0131 15:19:46.051788 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" exitCode=0
Jan 31 15:19:46 crc kubenswrapper[4763]: I0131 15:19:46.052034 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053"}
Jan 31 15:19:46 crc kubenswrapper[4763]: I0131 15:19:46.052394 4763 scope.go:117] "RemoveContainer" containerID="ed9e66445ed11fb3e31f897826819c87a5cfa2eaf20ae073d2a90c461528b554"
Jan 31 15:19:46 crc kubenswrapper[4763]: I0131 15:19:46.053783 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053"
Jan 31 15:19:46 crc kubenswrapper[4763]: E0131 15:19:46.054962 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78"
Jan 31 15:19:47 crc kubenswrapper[4763]: I0131 15:19:47.052504 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="196347d8-7892-4b32-8bc2-0127439a95f0" path="/var/lib/kubelet/pods/196347d8-7892-4b32-8bc2-0127439a95f0/volumes"
Jan 31 15:20:00 crc kubenswrapper[4763]: I0131 15:20:00.042058 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053"
Jan 31 15:20:00 crc kubenswrapper[4763]: E0131 15:20:00.042855 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78"
Jan 31 15:20:13 crc kubenswrapper[4763]: I0131 15:20:13.045338 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053"
Jan 31 15:20:13 crc kubenswrapper[4763]: E0131 15:20:13.046147 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78"
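
The machine-config-daemon restarts above are driven by a plain HTTP liveness probe: GET http://127.0.0.1:8798/health keeps returning connection refused, so the kubelet kills and back-offs the container. A minimal Go sketch of an equivalent check, with the endpoint taken from the log and the 1s timeout an assumed value (the kubelet's HTTP prober counts 2xx/3xx responses as success):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// healthy mirrors the shape of an HTTP liveness check: a refused
// connection (as seen repeatedly in this log) is a failure, and any
// status below 400 counts as success.
func healthy(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := healthy("http://127.0.0.1:8798/health", 1*time.Second); err != nil {
		fmt.Println("probe failed:", err)
	} else {
		fmt.Println("probe ok")
	}
}
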
Jan 31 15:20:15 crc kubenswrapper[4763]: I0131 15:20:15.964055 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l2zf2"]
Jan 31 15:20:15 crc kubenswrapper[4763]: E0131 15:20:15.965446 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf14829-3173-4bb3-9696-9f721465d757" containerName="extract-content"
Jan 31 15:20:15 crc kubenswrapper[4763]: I0131 15:20:15.965548 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf14829-3173-4bb3-9696-9f721465d757" containerName="extract-content"
Jan 31 15:20:15 crc kubenswrapper[4763]: E0131 15:20:15.965631 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf14829-3173-4bb3-9696-9f721465d757" containerName="registry-server"
Jan 31 15:20:15 crc kubenswrapper[4763]: I0131 15:20:15.965723 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf14829-3173-4bb3-9696-9f721465d757" containerName="registry-server"
Jan 31 15:20:15 crc kubenswrapper[4763]: E0131 15:20:15.965822 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf14829-3173-4bb3-9696-9f721465d757" containerName="extract-utilities"
Jan 31 15:20:15 crc kubenswrapper[4763]: I0131 15:20:15.965897 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf14829-3173-4bb3-9696-9f721465d757" containerName="extract-utilities"
Jan 31 15:20:15 crc kubenswrapper[4763]: I0131 15:20:15.966115 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bf14829-3173-4bb3-9696-9f721465d757" containerName="registry-server"
Jan 31 15:20:15 crc kubenswrapper[4763]: I0131 15:20:15.967397 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2zf2"
Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:15.984823 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l2zf2"]
Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.081247 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-utilities\") pod \"redhat-operators-l2zf2\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " pod="openshift-marketplace/redhat-operators-l2zf2"
Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.081298 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-catalog-content\") pod \"redhat-operators-l2zf2\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " pod="openshift-marketplace/redhat-operators-l2zf2"
Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.081352 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zdhd\" (UniqueName: \"kubernetes.io/projected/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-kube-api-access-6zdhd\") pod \"redhat-operators-l2zf2\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " pod="openshift-marketplace/redhat-operators-l2zf2"
Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.182296 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-utilities\") pod \"redhat-operators-l2zf2\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " pod="openshift-marketplace/redhat-operators-l2zf2"
Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.182351 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-catalog-content\") pod \"redhat-operators-l2zf2\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " pod="openshift-marketplace/redhat-operators-l2zf2"
Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.182399 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zdhd\" (UniqueName: \"kubernetes.io/projected/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-kube-api-access-6zdhd\") pod \"redhat-operators-l2zf2\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " pod="openshift-marketplace/redhat-operators-l2zf2"
Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.182861 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-utilities\") pod \"redhat-operators-l2zf2\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " pod="openshift-marketplace/redhat-operators-l2zf2"
Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.182954 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-catalog-content\") pod \"redhat-operators-l2zf2\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " pod="openshift-marketplace/redhat-operators-l2zf2"
Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.203802 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zdhd\" (UniqueName: \"kubernetes.io/projected/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-kube-api-access-6zdhd\") pod \"redhat-operators-l2zf2\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " pod="openshift-marketplace/redhat-operators-l2zf2"
Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.352468 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2zf2"
Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.769399 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l2zf2"]
Jan 31 15:20:17 crc kubenswrapper[4763]: I0131 15:20:17.295910 4763 generic.go:334] "Generic (PLEG): container finished" podID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerID="da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4" exitCode=0
Jan 31 15:20:17 crc kubenswrapper[4763]: I0131 15:20:17.296011 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zf2" event={"ID":"f9db86f1-db8a-45fd-9ffa-b2476ff8d085","Type":"ContainerDied","Data":"da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4"}
Jan 31 15:20:17 crc kubenswrapper[4763]: I0131 15:20:17.296980 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zf2" event={"ID":"f9db86f1-db8a-45fd-9ffa-b2476ff8d085","Type":"ContainerStarted","Data":"ad9a9c84dd32b517aa4dbfc52a3a6a70335067ab4c3d7b4f79559ecde622b549"}
Jan 31 15:20:18 crc kubenswrapper[4763]: I0131 15:20:18.306514 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zf2" event={"ID":"f9db86f1-db8a-45fd-9ffa-b2476ff8d085","Type":"ContainerStarted","Data":"58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a"}
Jan 31 15:20:19 crc kubenswrapper[4763]: I0131 15:20:19.315157 4763 generic.go:334] "Generic (PLEG): container finished" podID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerID="58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a" exitCode=0
Jan 31 15:20:19 crc kubenswrapper[4763]: I0131 15:20:19.315227 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zf2" event={"ID":"f9db86f1-db8a-45fd-9ffa-b2476ff8d085","Type":"ContainerDied","Data":"58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a"}
Jan 31 15:20:20 crc kubenswrapper[4763]: I0131 15:20:20.325577 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zf2" event={"ID":"f9db86f1-db8a-45fd-9ffa-b2476ff8d085","Type":"ContainerStarted","Data":"6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349"}
Jan 31 15:20:20 crc kubenswrapper[4763]: I0131 15:20:20.348434 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l2zf2" podStartSLOduration=2.947922591 podStartE2EDuration="5.348416584s" podCreationTimestamp="2026-01-31 15:20:15 +0000 UTC" firstStartedPulling="2026-01-31 15:20:17.296950848 +0000 UTC m=+1537.051689151" lastFinishedPulling="2026-01-31 15:20:19.697444851 +0000 UTC m=+1539.452183144" observedRunningTime="2026-01-31 15:20:20.343429763 +0000 UTC m=+1540.098168056" watchObservedRunningTime="2026-01-31 15:20:20.348416584 +0000 UTC m=+1540.103154877"
Jan 31 15:20:25 crc kubenswrapper[4763]: I0131 15:20:25.042677 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053"
Jan 31 15:20:25 crc kubenswrapper[4763]: E0131 15:20:25.043581 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78"
Jan 31 15:20:26 crc kubenswrapper[4763]: I0131 15:20:26.353272 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l2zf2"
Jan 31 15:20:26 crc kubenswrapper[4763]: I0131 15:20:26.354265 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l2zf2"
Jan 31 15:20:27 crc kubenswrapper[4763]: I0131 15:20:27.414401 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l2zf2" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerName="registry-server" probeResult="failure" output=<
Jan 31 15:20:27 crc kubenswrapper[4763]: timeout: failed to connect service ":50051" within 1s
Jan 31 15:20:27 crc kubenswrapper[4763]: >
Jan 31 15:20:36 crc kubenswrapper[4763]: I0131 15:20:36.409276 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l2zf2"
Jan 31 15:20:36 crc kubenswrapper[4763]: I0131 15:20:36.455574 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l2zf2"
Jan 31 15:20:36 crc kubenswrapper[4763]: I0131 15:20:36.644228 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l2zf2"]
Jan 31 15:20:37 crc kubenswrapper[4763]: I0131 15:20:37.460185 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l2zf2" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerName="registry-server" containerID="cri-o://6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349" gracePeriod=2
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.042948 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053"
Jan 31 15:20:38 crc kubenswrapper[4763]: E0131 15:20:38.043769 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78"
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.437206 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2zf2"
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.469205 4763 generic.go:334] "Generic (PLEG): container finished" podID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerID="6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349" exitCode=0
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.469262 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zf2" event={"ID":"f9db86f1-db8a-45fd-9ffa-b2476ff8d085","Type":"ContainerDied","Data":"6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349"}
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.469292 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zf2" event={"ID":"f9db86f1-db8a-45fd-9ffa-b2476ff8d085","Type":"ContainerDied","Data":"ad9a9c84dd32b517aa4dbfc52a3a6a70335067ab4c3d7b4f79559ecde622b549"}
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.469310 4763 scope.go:117] "RemoveContainer" containerID="6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349"
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.469449 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2zf2"
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.492526 4763 scope.go:117] "RemoveContainer" containerID="58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a"
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.525924 4763 scope.go:117] "RemoveContainer" containerID="da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4"
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.554712 4763 scope.go:117] "RemoveContainer" containerID="6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349"
Jan 31 15:20:38 crc kubenswrapper[4763]: E0131 15:20:38.558181 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349\": container with ID starting with 6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349 not found: ID does not exist" containerID="6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349"
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.558223 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349"} err="failed to get container status \"6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349\": rpc error: code = NotFound desc = could not find container \"6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349\": container with ID starting with 6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349 not found: ID does not exist"
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.558255 4763 scope.go:117] "RemoveContainer" containerID="58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a"
Jan 31 15:20:38 crc kubenswrapper[4763]: E0131 15:20:38.558945 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a\": container with ID starting with 58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a not found: ID does not exist" containerID="58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a"
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.558975 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a"} err="failed to get container status \"58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a\": rpc error: code = NotFound desc = could not find container \"58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a\": container with ID starting with 58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a not found: ID does not exist"
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.558989 4763 scope.go:117] "RemoveContainer" containerID="da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4"
Jan 31 15:20:38 crc kubenswrapper[4763]: E0131 15:20:38.559427 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4\": container with ID starting with da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4 not found: ID does not exist" containerID="da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4"
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.559456 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4"} err="failed to get container status \"da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4\": rpc error: code = NotFound desc = could not find container \"da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4\": container with ID starting with da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4 not found: ID does not exist"
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.600523 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-utilities\") pod \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") "
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.600611 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-catalog-content\") pod \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") "
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.600739 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zdhd\" (UniqueName: \"kubernetes.io/projected/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-kube-api-access-6zdhd\") pod \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") "
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.601663 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-utilities" (OuterVolumeSpecName: "utilities") pod "f9db86f1-db8a-45fd-9ffa-b2476ff8d085" (UID: "f9db86f1-db8a-45fd-9ffa-b2476ff8d085"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.606370 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-kube-api-access-6zdhd" (OuterVolumeSpecName: "kube-api-access-6zdhd") pod "f9db86f1-db8a-45fd-9ffa-b2476ff8d085" (UID: "f9db86f1-db8a-45fd-9ffa-b2476ff8d085"). InnerVolumeSpecName "kube-api-access-6zdhd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.702580 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.702646 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zdhd\" (UniqueName: \"kubernetes.io/projected/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-kube-api-access-6zdhd\") on node \"crc\" DevicePath \"\""
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.764650 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9db86f1-db8a-45fd-9ffa-b2476ff8d085" (UID: "f9db86f1-db8a-45fd-9ffa-b2476ff8d085"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.817136 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.821182 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l2zf2"]
Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.832098 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l2zf2"]
Jan 31 15:20:39 crc kubenswrapper[4763]: I0131 15:20:39.052554 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" path="/var/lib/kubelet/pods/f9db86f1-db8a-45fd-9ffa-b2476ff8d085/volumes"
Jan 31 15:20:42 crc kubenswrapper[4763]: I0131 15:20:42.103799 4763 scope.go:117] "RemoveContainer" containerID="203b36129261a511c80fe2b8e1a92066fc0b81cc45cfe796a72d0edaa9da1993"
Jan 31 15:20:42 crc kubenswrapper[4763]: I0131 15:20:42.129415 4763 scope.go:117] "RemoveContainer" containerID="84fd8b869419477646b3303789d7d4ce59277c7782af2bb140c22752aadb6987"
Jan 31 15:20:51 crc kubenswrapper[4763]: I0131 15:20:51.047360 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053"
Jan 31 15:20:51 crc kubenswrapper[4763]: E0131 15:20:51.048301 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78"
Jan 31 15:21:00 crc kubenswrapper[4763]: I0131 15:21:00.039748 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-38d2-account-create-update-cpmns"] Jan 31 15:21:00 crc kubenswrapper[4763]: I0131 15:21:00.046544 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-db-create-xn2nh"] Jan 31 15:21:00 crc kubenswrapper[4763]: I0131 15:21:00.056712 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-db-create-xn2nh"] Jan 31 15:21:00 crc kubenswrapper[4763]: I0131 15:21:00.066835 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-38d2-account-create-update-cpmns"] Jan 31 15:21:01 crc kubenswrapper[4763]: I0131 15:21:01.054800 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1994b227-dbc6-494a-886d-4573eee02640" path="/var/lib/kubelet/pods/1994b227-dbc6-494a-886d-4573eee02640/volumes" Jan 31 15:21:01 crc kubenswrapper[4763]: I0131 15:21:01.055330 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a1a2199-bf73-476a-8a6b-c50b1c26aa6c" path="/var/lib/kubelet/pods/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c/volumes" Jan 31 15:21:04 crc kubenswrapper[4763]: I0131 15:21:04.042195 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:21:04 crc kubenswrapper[4763]: E0131 15:21:04.042668 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:21:17 crc kubenswrapper[4763]: I0131 15:21:17.049644 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-cfz59"] Jan 31 15:21:17 crc kubenswrapper[4763]: I0131 15:21:17.054387 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-cfz59"] Jan 31 15:21:18 crc kubenswrapper[4763]: I0131 15:21:18.042361 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:21:18 crc kubenswrapper[4763]: E0131 15:21:18.043203 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:21:19 crc kubenswrapper[4763]: I0131 15:21:19.055156 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="576fbbd2-e600-40a9-95f4-2772c96807f1" path="/var/lib/kubelet/pods/576fbbd2-e600-40a9-95f4-2772c96807f1/volumes" Jan 31 15:21:19 crc kubenswrapper[4763]: E0131 15:21:19.791294 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-storage-0" podUID="7390eb43-2a86-4ad9-b504-6fca814daf1c" Jan 31 15:21:19 crc kubenswrapper[4763]: I0131 15:21:19.844534 4763 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:21:20 crc kubenswrapper[4763]: E0131 15:21:20.798008 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" podUID="c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" Jan 31 15:21:20 crc kubenswrapper[4763]: I0131 15:21:20.853310 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:21:23 crc kubenswrapper[4763]: I0131 15:21:23.639871 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:21:23 crc kubenswrapper[4763]: E0131 15:21:23.640084 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:21:23 crc kubenswrapper[4763]: E0131 15:21:23.640114 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:21:23 crc kubenswrapper[4763]: E0131 15:21:23.640205 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:23:25.64017921 +0000 UTC m=+1725.394917533 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found Jan 31 15:21:24 crc kubenswrapper[4763]: I0131 15:21:24.037012 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-tjb5m"] Jan 31 15:21:24 crc kubenswrapper[4763]: I0131 15:21:24.045362 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-tjb5m"] Jan 31 15:21:24 crc kubenswrapper[4763]: I0131 15:21:24.555301 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:21:24 crc kubenswrapper[4763]: E0131 15:21:24.555505 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:21:24 crc kubenswrapper[4763]: E0131 15:21:24.555819 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found Jan 31 15:21:24 crc kubenswrapper[4763]: E0131 15:21:24.555905 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:23:26.555879382 +0000 UTC m=+1726.310617705 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found Jan 31 15:21:25 crc kubenswrapper[4763]: I0131 15:21:25.065309 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b3e2d68-7406-4653-85ed-41746d3a6ea7" path="/var/lib/kubelet/pods/9b3e2d68-7406-4653-85ed-41746d3a6ea7/volumes" Jan 31 15:21:29 crc kubenswrapper[4763]: I0131 15:21:29.041655 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:21:29 crc kubenswrapper[4763]: E0131 15:21:29.042518 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:21:40 crc kubenswrapper[4763]: I0131 15:21:40.041607 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:21:40 crc kubenswrapper[4763]: E0131 15:21:40.043336 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:21:42 crc kubenswrapper[4763]: I0131 15:21:42.200786 4763 scope.go:117] "RemoveContainer" containerID="dc43ea79353044d825fa92732b826ecb7bb9d81d79e11fde2b7c3d0258701fc2" Jan 31 15:21:42 crc kubenswrapper[4763]: I0131 15:21:42.225930 4763 scope.go:117] "RemoveContainer" containerID="8180dd910ac6efc8ad5f3d6dcb80cc45662cc7fb7c88809c730b03aa35ba8bc3" Jan 31 15:21:42 crc kubenswrapper[4763]: I0131 15:21:42.271940 4763 scope.go:117] "RemoveContainer" containerID="22a73cc01e38d3368c2378a8856a884268bdaab9f5443ff62ac66e26d223ed89" Jan 31 15:21:42 crc kubenswrapper[4763]: I0131 15:21:42.304397 4763 scope.go:117] "RemoveContainer" containerID="d4b2fbc8cb2358be37b442ce5253eaaf842807de68a087674a3ea1292f2dd38e" Jan 31 15:21:42 crc kubenswrapper[4763]: I0131 15:21:42.333300 4763 scope.go:117] "RemoveContainer" containerID="6b27f13fa86685c4b37caba09090beecec2d3e1290d084b6ae1cf269665b318e" Jan 31 15:21:42 crc kubenswrapper[4763]: I0131 15:21:42.377354 4763 scope.go:117] "RemoveContainer" containerID="411cd9ca106798ab981b147bd785e0f4defae6f019c51c98fdfff57480304b59" Jan 31 15:21:53 crc kubenswrapper[4763]: I0131 15:21:53.042548 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:21:53 crc kubenswrapper[4763]: E0131 15:21:53.043899 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:22:00 crc kubenswrapper[4763]: I0131 15:22:00.048112 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-db-create-fxrtm"] Jan 31 15:22:00 crc kubenswrapper[4763]: I0131 15:22:00.054362 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-0181-account-create-update-96pj2"] Jan 31 15:22:00 crc kubenswrapper[4763]: I0131 15:22:00.060670 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-db-create-fxrtm"] Jan 31 15:22:00 crc kubenswrapper[4763]: I0131 15:22:00.066489 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-0181-account-create-update-96pj2"] Jan 31 15:22:01 crc kubenswrapper[4763]: I0131 15:22:01.050410 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="464b92bd-fb87-4fc5-aa90-5460b1e35eec" path="/var/lib/kubelet/pods/464b92bd-fb87-4fc5-aa90-5460b1e35eec/volumes" Jan 31 15:22:01 crc kubenswrapper[4763]: I0131 15:22:01.051321 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be55f4fd-e12f-4bcf-ab19-d71977f3e6ec" path="/var/lib/kubelet/pods/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec/volumes" Jan 31 15:22:08 crc kubenswrapper[4763]: I0131 15:22:08.042523 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:22:08 crc kubenswrapper[4763]: E0131 15:22:08.043566 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:22:21 crc kubenswrapper[4763]: I0131 15:22:21.048659 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:22:21 crc kubenswrapper[4763]: E0131 15:22:21.049811 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:22:36 crc kubenswrapper[4763]: I0131 15:22:36.041844 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:22:36 crc kubenswrapper[4763]: E0131 15:22:36.043117 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.494175 4763 scope.go:117] "RemoveContainer" containerID="4007498dac8fd3b8123d95c0037cc87ebfb6676c6920014087622d824443355a" Jan 31 15:22:42 crc 
kubenswrapper[4763]: I0131 15:22:42.516749 4763 scope.go:117] "RemoveContainer" containerID="a193fbbc6d4c1b876140e857534a3e3476054468d680fb5032ef95fc43449eec" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.538060 4763 scope.go:117] "RemoveContainer" containerID="6436423e21963e2b944d7661270d579b854013f8d01c88fd036a9b4a15a61846" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.553478 4763 scope.go:117] "RemoveContainer" containerID="2a92b2bd4fc8d6dca2fcd21333d8ea9dc1bea550c48e4c4023603133141572ad" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.571753 4763 scope.go:117] "RemoveContainer" containerID="aa5b58c7ff93f3c15da7f7f96ecebd65245380f1df8d802df393a3683573f42b" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.596079 4763 scope.go:117] "RemoveContainer" containerID="aa4453d984b8623efdeb6c14caeec9684c9789a6bbf3f070fda2ae53d211bc67" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.610518 4763 scope.go:117] "RemoveContainer" containerID="9716359f4da3a71b9390a156a58d83e5742038cd7c70d2aaa6dddd57c4c7402f" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.626372 4763 scope.go:117] "RemoveContainer" containerID="18e785ee82660570c7a3c1f8a825fe58e5562959092c137ca7e8ad12f67b2cdf" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.642180 4763 scope.go:117] "RemoveContainer" containerID="d21b593c360d355cf6f976e7a33dd5c9a7af1da589078440cac056c5b3195552" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.654819 4763 scope.go:117] "RemoveContainer" containerID="539252f99bc3e6264ace92eb1a171fa552e64dc1e5ab28a67e1806e3665008b7" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.671896 4763 scope.go:117] "RemoveContainer" containerID="36471f31ca52c785e38c6b2b66ccd5857f6ef478d9ab8974c38189fbf0e27a7c" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.691184 4763 scope.go:117] "RemoveContainer" containerID="a540ed0c915c9ec8346e959a99b0e8cef75297ffb67063cbd5e427a00b227441" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.708008 4763 scope.go:117] "RemoveContainer" containerID="fee6200b81ed36139edc76fff1de6f650a35a10f71c9521569ecb3d4c7be34df" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.732974 4763 scope.go:117] "RemoveContainer" containerID="433775c4538892b2b06557027ca728e6b8b86916941810cde9bf0aaa7cec78dd" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.756859 4763 scope.go:117] "RemoveContainer" containerID="5de93fda3e4eeec09c3e0cab0c53f8bd9bc5a576f08e2f6a3e358bb501d0aee7" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.771658 4763 scope.go:117] "RemoveContainer" containerID="0df182416b7bb3077be071e835d5e5e14d5a8b304cf30505e3ab3257400dd215" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.789102 4763 scope.go:117] "RemoveContainer" containerID="7de2f6d754dc155c1220dbc7207385fffee69382d5745f470315b5df89030e55" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.813801 4763 scope.go:117] "RemoveContainer" containerID="c60e5d86263ce2d232f363b5e6d0ca6b837b598ceba6f85ee0bd89925cf0c6dd" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.835207 4763 scope.go:117] "RemoveContainer" containerID="a341e6df1a37494965c9886e4a13005e2a4f651428e838086726ac3163d9cf3e" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.853293 4763 scope.go:117] "RemoveContainer" containerID="811bfef1f45e1c8bfb983ccdb9950299ee77652d3e6b07f11a1f38dcaa006989" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.865656 4763 scope.go:117] "RemoveContainer" 
containerID="46bb08319fc51c2ac2e298b3d88809c97e2206ce13b2762bb97ee19fa37761d9" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.882296 4763 scope.go:117] "RemoveContainer" containerID="8743be8babd948dd0c5cadcbb327888a64b81d91784dcd88e4edc80760703747" Jan 31 15:22:51 crc kubenswrapper[4763]: I0131 15:22:51.047638 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:22:51 crc kubenswrapper[4763]: E0131 15:22:51.048368 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:23:05 crc kubenswrapper[4763]: I0131 15:23:05.042575 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:23:05 crc kubenswrapper[4763]: E0131 15:23:05.043186 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:23:19 crc kubenswrapper[4763]: I0131 15:23:19.041562 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:23:19 crc kubenswrapper[4763]: E0131 15:23:19.042597 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:23:22 crc kubenswrapper[4763]: E0131 15:23:22.846852 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-storage-0" podUID="7390eb43-2a86-4ad9-b504-6fca814daf1c" Jan 31 15:23:22 crc kubenswrapper[4763]: I0131 15:23:22.916531 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:23:23 crc kubenswrapper[4763]: E0131 15:23:23.855345 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" podUID="c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" Jan 31 15:23:23 crc kubenswrapper[4763]: I0131 15:23:23.925418 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:23:25 crc kubenswrapper[4763]: I0131 15:23:25.692263 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:23:25 crc kubenswrapper[4763]: E0131 15:23:25.692449 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:23:25 crc kubenswrapper[4763]: E0131 15:23:25.692716 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:23:25 crc kubenswrapper[4763]: E0131 15:23:25.692767 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:25:27.692750066 +0000 UTC m=+1847.447488359 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found Jan 31 15:23:26 crc kubenswrapper[4763]: I0131 15:23:26.603791 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:23:26 crc kubenswrapper[4763]: E0131 15:23:26.603990 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:23:26 crc kubenswrapper[4763]: E0131 15:23:26.604007 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found Jan 31 15:23:26 crc kubenswrapper[4763]: E0131 15:23:26.604069 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:25:28.604052675 +0000 UTC m=+1848.358790968 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found Jan 31 15:23:30 crc kubenswrapper[4763]: I0131 15:23:30.041329 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:23:30 crc kubenswrapper[4763]: E0131 15:23:30.042003 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:23:43 crc kubenswrapper[4763]: I0131 15:23:43.042727 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:23:43 crc kubenswrapper[4763]: E0131 15:23:43.044015 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:23:56 crc kubenswrapper[4763]: I0131 15:23:56.041896 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:23:56 crc kubenswrapper[4763]: E0131 15:23:56.044893 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:24:08 crc kubenswrapper[4763]: I0131 15:24:08.042036 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:24:08 crc kubenswrapper[4763]: E0131 15:24:08.043008 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:24:22 crc kubenswrapper[4763]: I0131 15:24:22.045426 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:24:22 crc kubenswrapper[4763]: E0131 15:24:22.046787 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:24:35 crc kubenswrapper[4763]: I0131 15:24:35.043129 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:24:35 crc kubenswrapper[4763]: E0131 15:24:35.044216 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:24:47 crc kubenswrapper[4763]: I0131 15:24:47.042264 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:24:47 crc kubenswrapper[4763]: I0131 15:24:47.649581 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"2bfabe5df54337c4196a35b7009ba75d2a72e0f1ccf13fd0226987ae1c4b7c11"} Jan 31 15:25:25 crc kubenswrapper[4763]: E0131 15:25:25.918566 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-storage-0" podUID="7390eb43-2a86-4ad9-b504-6fca814daf1c" Jan 31 15:25:25 crc kubenswrapper[4763]: I0131 15:25:25.982048 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:25:26 crc kubenswrapper[4763]: E0131 15:25:26.927842 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" podUID="c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" Jan 31 15:25:26 crc kubenswrapper[4763]: I0131 15:25:26.990679 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:25:27 crc kubenswrapper[4763]: I0131 15:25:27.753115 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:25:27 crc kubenswrapper[4763]: E0131 15:25:27.753273 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:25:27 crc kubenswrapper[4763]: E0131 15:25:27.753286 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:25:27 crc kubenswrapper[4763]: E0131 15:25:27.753329 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:27:29.753317846 +0000 UTC m=+1969.508056139 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found Jan 31 15:25:28 crc kubenswrapper[4763]: I0131 15:25:28.669274 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:25:28 crc kubenswrapper[4763]: E0131 15:25:28.669475 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:25:28 crc kubenswrapper[4763]: E0131 15:25:28.669509 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found Jan 31 15:25:28 crc kubenswrapper[4763]: E0131 15:25:28.669594 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:27:30.669568264 +0000 UTC m=+1970.424306587 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.368989 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5fm2z"] Jan 31 15:25:39 crc kubenswrapper[4763]: E0131 15:25:39.369611 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerName="extract-content" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.369622 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerName="extract-content" Jan 31 15:25:39 crc kubenswrapper[4763]: E0131 15:25:39.369636 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerName="extract-utilities" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.369643 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerName="extract-utilities" Jan 31 15:25:39 crc kubenswrapper[4763]: E0131 15:25:39.369657 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerName="registry-server" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.369664 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerName="registry-server" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.369801 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerName="registry-server" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.370641 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.399522 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5fm2z"] Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.463220 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-utilities\") pod \"community-operators-5fm2z\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.463290 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtmxm\" (UniqueName: \"kubernetes.io/projected/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-kube-api-access-dtmxm\") pod \"community-operators-5fm2z\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.463317 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-catalog-content\") pod \"community-operators-5fm2z\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.565231 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-utilities\") pod \"community-operators-5fm2z\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.565547 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtmxm\" (UniqueName: \"kubernetes.io/projected/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-kube-api-access-dtmxm\") pod \"community-operators-5fm2z\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.565645 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-catalog-content\") pod \"community-operators-5fm2z\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.566234 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-catalog-content\") pod \"community-operators-5fm2z\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.566344 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-utilities\") pod \"community-operators-5fm2z\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.589222 4763 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dtmxm\" (UniqueName: \"kubernetes.io/projected/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-kube-api-access-dtmxm\") pod \"community-operators-5fm2z\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.690839 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:40 crc kubenswrapper[4763]: I0131 15:25:40.192339 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5fm2z"] Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.116419 4763 generic.go:334] "Generic (PLEG): container finished" podID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerID="4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f" exitCode=0 Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.116509 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fm2z" event={"ID":"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74","Type":"ContainerDied","Data":"4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f"} Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.118057 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fm2z" event={"ID":"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74","Type":"ContainerStarted","Data":"071f2fd9f37cb5bf2b59bf23debbd381506e38b2b6d8f101edfb4c67a327bfe5"} Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.121143 4763 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.181831 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mb7gx"] Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.183070 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.204233 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mb7gx"] Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.292136 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cz8t\" (UniqueName: \"kubernetes.io/projected/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-kube-api-access-8cz8t\") pod \"certified-operators-mb7gx\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.292893 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-utilities\") pod \"certified-operators-mb7gx\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.293089 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-catalog-content\") pod \"certified-operators-mb7gx\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.394799 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-catalog-content\") pod \"certified-operators-mb7gx\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.394893 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cz8t\" (UniqueName: \"kubernetes.io/projected/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-kube-api-access-8cz8t\") pod \"certified-operators-mb7gx\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.395000 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-utilities\") pod \"certified-operators-mb7gx\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.395434 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-catalog-content\") pod \"certified-operators-mb7gx\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.395538 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-utilities\") pod \"certified-operators-mb7gx\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.422504 4763 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8cz8t\" (UniqueName: \"kubernetes.io/projected/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-kube-api-access-8cz8t\") pod \"certified-operators-mb7gx\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.500686 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.766133 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mb7gx"] Jan 31 15:25:41 crc kubenswrapper[4763]: W0131 15:25:41.767451 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb8791e5_8470_4f7e_bf4b_ef5f031b179e.slice/crio-cff51285f5c8618d9bb64cc7ac41f7d2aa2e96174e4b0d0fdf2766d3cd566f91 WatchSource:0}: Error finding container cff51285f5c8618d9bb64cc7ac41f7d2aa2e96174e4b0d0fdf2766d3cd566f91: Status 404 returned error can't find the container with id cff51285f5c8618d9bb64cc7ac41f7d2aa2e96174e4b0d0fdf2766d3cd566f91 Jan 31 15:25:42 crc kubenswrapper[4763]: I0131 15:25:42.130231 4763 generic.go:334] "Generic (PLEG): container finished" podID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerID="821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d" exitCode=0 Jan 31 15:25:42 crc kubenswrapper[4763]: I0131 15:25:42.130294 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb7gx" event={"ID":"cb8791e5-8470-4f7e-bf4b-ef5f031b179e","Type":"ContainerDied","Data":"821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d"} Jan 31 15:25:42 crc kubenswrapper[4763]: I0131 15:25:42.130637 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb7gx" event={"ID":"cb8791e5-8470-4f7e-bf4b-ef5f031b179e","Type":"ContainerStarted","Data":"cff51285f5c8618d9bb64cc7ac41f7d2aa2e96174e4b0d0fdf2766d3cd566f91"} Jan 31 15:25:43 crc kubenswrapper[4763]: I0131 15:25:43.146545 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fm2z" event={"ID":"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74","Type":"ContainerStarted","Data":"3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d"} Jan 31 15:25:44 crc kubenswrapper[4763]: I0131 15:25:44.165299 4763 generic.go:334] "Generic (PLEG): container finished" podID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerID="3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb" exitCode=0 Jan 31 15:25:44 crc kubenswrapper[4763]: I0131 15:25:44.165426 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb7gx" event={"ID":"cb8791e5-8470-4f7e-bf4b-ef5f031b179e","Type":"ContainerDied","Data":"3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb"} Jan 31 15:25:44 crc kubenswrapper[4763]: I0131 15:25:44.169181 4763 generic.go:334] "Generic (PLEG): container finished" podID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerID="3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d" exitCode=0 Jan 31 15:25:44 crc kubenswrapper[4763]: I0131 15:25:44.169225 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fm2z" 
event={"ID":"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74","Type":"ContainerDied","Data":"3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d"} Jan 31 15:25:45 crc kubenswrapper[4763]: I0131 15:25:45.178502 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fm2z" event={"ID":"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74","Type":"ContainerStarted","Data":"73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04"} Jan 31 15:25:45 crc kubenswrapper[4763]: I0131 15:25:45.181016 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb7gx" event={"ID":"cb8791e5-8470-4f7e-bf4b-ef5f031b179e","Type":"ContainerStarted","Data":"6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c"} Jan 31 15:25:45 crc kubenswrapper[4763]: I0131 15:25:45.213939 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5fm2z" podStartSLOduration=2.774084409 podStartE2EDuration="6.213922699s" podCreationTimestamp="2026-01-31 15:25:39 +0000 UTC" firstStartedPulling="2026-01-31 15:25:41.120773186 +0000 UTC m=+1860.875511489" lastFinishedPulling="2026-01-31 15:25:44.560611456 +0000 UTC m=+1864.315349779" observedRunningTime="2026-01-31 15:25:45.211028695 +0000 UTC m=+1864.965766998" watchObservedRunningTime="2026-01-31 15:25:45.213922699 +0000 UTC m=+1864.968660992" Jan 31 15:25:45 crc kubenswrapper[4763]: I0131 15:25:45.233338 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mb7gx" podStartSLOduration=1.751121996 podStartE2EDuration="4.23331862s" podCreationTimestamp="2026-01-31 15:25:41 +0000 UTC" firstStartedPulling="2026-01-31 15:25:42.132879946 +0000 UTC m=+1861.887618259" lastFinishedPulling="2026-01-31 15:25:44.61507658 +0000 UTC m=+1864.369814883" observedRunningTime="2026-01-31 15:25:45.230345664 +0000 UTC m=+1864.985083967" watchObservedRunningTime="2026-01-31 15:25:45.23331862 +0000 UTC m=+1864.988056923" Jan 31 15:25:49 crc kubenswrapper[4763]: I0131 15:25:49.691411 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:49 crc kubenswrapper[4763]: I0131 15:25:49.691796 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:49 crc kubenswrapper[4763]: I0131 15:25:49.773184 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:50 crc kubenswrapper[4763]: I0131 15:25:50.303242 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:50 crc kubenswrapper[4763]: I0131 15:25:50.761097 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5fm2z"] Jan 31 15:25:51 crc kubenswrapper[4763]: I0131 15:25:51.501385 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:51 crc kubenswrapper[4763]: I0131 15:25:51.501615 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:51 crc kubenswrapper[4763]: I0131 15:25:51.574909 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.245921 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5fm2z" podUID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerName="registry-server" containerID="cri-o://73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04" gracePeriod=2 Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.299218 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.606268 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.790893 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-utilities\") pod \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.790939 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtmxm\" (UniqueName: \"kubernetes.io/projected/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-kube-api-access-dtmxm\") pod \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.790971 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-catalog-content\") pod \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.792016 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-utilities" (OuterVolumeSpecName: "utilities") pod "cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" (UID: "cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.801066 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-kube-api-access-dtmxm" (OuterVolumeSpecName: "kube-api-access-dtmxm") pod "cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" (UID: "cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74"). InnerVolumeSpecName "kube-api-access-dtmxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.846546 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" (UID: "cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.892393 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.892434 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtmxm\" (UniqueName: \"kubernetes.io/projected/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-kube-api-access-dtmxm\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.892449 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.255549 4763 generic.go:334] "Generic (PLEG): container finished" podID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerID="73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04" exitCode=0 Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.255627 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fm2z" event={"ID":"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74","Type":"ContainerDied","Data":"73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04"} Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.255993 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fm2z" event={"ID":"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74","Type":"ContainerDied","Data":"071f2fd9f37cb5bf2b59bf23debbd381506e38b2b6d8f101edfb4c67a327bfe5"} Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.255684 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.256053 4763 scope.go:117] "RemoveContainer" containerID="73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.282641 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5fm2z"] Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.283314 4763 scope.go:117] "RemoveContainer" containerID="3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.295879 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5fm2z"] Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.311586 4763 scope.go:117] "RemoveContainer" containerID="4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.329158 4763 scope.go:117] "RemoveContainer" containerID="73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04" Jan 31 15:25:53 crc kubenswrapper[4763]: E0131 15:25:53.329603 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04\": container with ID starting with 73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04 not found: ID does not exist" containerID="73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.329644 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04"} err="failed to get container status \"73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04\": rpc error: code = NotFound desc = could not find container \"73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04\": container with ID starting with 73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04 not found: ID does not exist" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.329676 4763 scope.go:117] "RemoveContainer" containerID="3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d" Jan 31 15:25:53 crc kubenswrapper[4763]: E0131 15:25:53.330053 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d\": container with ID starting with 3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d not found: ID does not exist" containerID="3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.330085 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d"} err="failed to get container status \"3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d\": rpc error: code = NotFound desc = could not find container \"3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d\": container with ID starting with 3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d not found: ID does not exist" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.330109 4763 scope.go:117] "RemoveContainer" 
containerID="4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f" Jan 31 15:25:53 crc kubenswrapper[4763]: E0131 15:25:53.330400 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f\": container with ID starting with 4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f not found: ID does not exist" containerID="4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.330430 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f"} err="failed to get container status \"4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f\": rpc error: code = NotFound desc = could not find container \"4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f\": container with ID starting with 4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f not found: ID does not exist" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.564717 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mb7gx"] Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.075555 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" path="/var/lib/kubelet/pods/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74/volumes" Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.272688 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mb7gx" podUID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerName="registry-server" containerID="cri-o://6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c" gracePeriod=2 Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.727619 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.842493 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-catalog-content\") pod \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.842755 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cz8t\" (UniqueName: \"kubernetes.io/projected/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-kube-api-access-8cz8t\") pod \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.842814 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-utilities\") pod \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.843735 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-utilities" (OuterVolumeSpecName: "utilities") pod "cb8791e5-8470-4f7e-bf4b-ef5f031b179e" (UID: "cb8791e5-8470-4f7e-bf4b-ef5f031b179e"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.851037 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-kube-api-access-8cz8t" (OuterVolumeSpecName: "kube-api-access-8cz8t") pod "cb8791e5-8470-4f7e-bf4b-ef5f031b179e" (UID: "cb8791e5-8470-4f7e-bf4b-ef5f031b179e"). InnerVolumeSpecName "kube-api-access-8cz8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.887683 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb8791e5-8470-4f7e-bf4b-ef5f031b179e" (UID: "cb8791e5-8470-4f7e-bf4b-ef5f031b179e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.944842 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.944884 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cz8t\" (UniqueName: \"kubernetes.io/projected/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-kube-api-access-8cz8t\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.944898 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.285736 4763 generic.go:334] "Generic (PLEG): container finished" podID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerID="6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c" exitCode=0 Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.285780 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.285798 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb7gx" event={"ID":"cb8791e5-8470-4f7e-bf4b-ef5f031b179e","Type":"ContainerDied","Data":"6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c"} Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.285867 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb7gx" event={"ID":"cb8791e5-8470-4f7e-bf4b-ef5f031b179e","Type":"ContainerDied","Data":"cff51285f5c8618d9bb64cc7ac41f7d2aa2e96174e4b0d0fdf2766d3cd566f91"} Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.285898 4763 scope.go:117] "RemoveContainer" containerID="6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.304085 4763 scope.go:117] "RemoveContainer" containerID="3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.324880 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mb7gx"] Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.334554 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mb7gx"] Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.351316 4763 scope.go:117] "RemoveContainer" containerID="821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.385777 4763 scope.go:117] "RemoveContainer" containerID="6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c" Jan 31 15:25:56 crc kubenswrapper[4763]: E0131 15:25:56.386289 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c\": container with ID starting with 6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c not found: ID does not exist" containerID="6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.386354 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c"} err="failed to get container status \"6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c\": rpc error: code = NotFound desc = could not find container \"6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c\": container with ID starting with 6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c not found: ID does not exist" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.386387 4763 scope.go:117] "RemoveContainer" containerID="3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb" Jan 31 15:25:56 crc kubenswrapper[4763]: E0131 15:25:56.386729 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb\": container with ID starting with 3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb not found: ID does not exist" containerID="3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.386764 4763 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb"} err="failed to get container status \"3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb\": rpc error: code = NotFound desc = could not find container \"3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb\": container with ID starting with 3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb not found: ID does not exist" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.386786 4763 scope.go:117] "RemoveContainer" containerID="821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d" Jan 31 15:25:56 crc kubenswrapper[4763]: E0131 15:25:56.387001 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d\": container with ID starting with 821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d not found: ID does not exist" containerID="821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.387028 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d"} err="failed to get container status \"821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d\": rpc error: code = NotFound desc = could not find container \"821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d\": container with ID starting with 821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d not found: ID does not exist" Jan 31 15:25:57 crc kubenswrapper[4763]: I0131 15:25:57.056988 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" path="/var/lib/kubelet/pods/cb8791e5-8470-4f7e-bf4b-ef5f031b179e/volumes" Jan 31 15:27:14 crc kubenswrapper[4763]: I0131 15:27:14.177481 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:27:14 crc kubenswrapper[4763]: I0131 15:27:14.178126 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:27:15 crc kubenswrapper[4763]: I0131 15:27:15.958679 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:27:15 crc kubenswrapper[4763]: E0131 15:27:15.959659 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context canceled" pod="swift-kuttl-tests/swift-storage-0" podUID="7390eb43-2a86-4ad9-b504-6fca814daf1c" Jan 31 15:27:15 crc kubenswrapper[4763]: I0131 15:27:15.999857 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.008865 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.049817 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvzrw\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-kube-api-access-bvzrw\") pod \"7390eb43-2a86-4ad9-b504-6fca814daf1c\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.049917 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-cache\") pod \"7390eb43-2a86-4ad9-b504-6fca814daf1c\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.049994 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-lock\") pod \"7390eb43-2a86-4ad9-b504-6fca814daf1c\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.050080 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"7390eb43-2a86-4ad9-b504-6fca814daf1c\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.050252 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-cache" (OuterVolumeSpecName: "cache") pod "7390eb43-2a86-4ad9-b504-6fca814daf1c" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.050347 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-lock" (OuterVolumeSpecName: "lock") pod "7390eb43-2a86-4ad9-b504-6fca814daf1c" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.050435 4763 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-cache\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.050453 4763 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.058094 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "swift") pod "7390eb43-2a86-4ad9-b504-6fca814daf1c" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.064948 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-kube-api-access-bvzrw" (OuterVolumeSpecName: "kube-api-access-bvzrw") pod "7390eb43-2a86-4ad9-b504-6fca814daf1c" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c"). 
InnerVolumeSpecName "kube-api-access-bvzrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.151504 4763 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.151542 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvzrw\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-kube-api-access-bvzrw\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.162624 4763 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.252560 4763 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.005874 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.068891 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.079102 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.124184 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.125033 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerName="registry-server" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.125058 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerName="registry-server" Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.125076 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerName="extract-utilities" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.125083 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerName="extract-utilities" Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.125098 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerName="registry-server" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.125104 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerName="registry-server" Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.125111 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerName="extract-content" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.125116 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerName="extract-content" Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.125130 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" 
containerName="extract-utilities" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.125135 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerName="extract-utilities" Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.125146 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerName="extract-content" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.125151 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerName="extract-content" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.125282 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerName="registry-server" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.125291 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerName="registry-server" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.128940 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.132780 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"combined-ca-bundle" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.132944 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.141735 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.165432 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-lock\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.165532 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-cache\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.165750 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.165817 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.165877 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb2lm\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-kube-api-access-sb2lm\") pod \"swift-storage-0\" (UID: 
\"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.165952 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.166018 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.267042 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.267181 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.267194 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.267239 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift podName:b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:17.767223852 +0000 UTC m=+1957.521962145 (durationBeforeRetry 500ms). 
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.267444 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-lock\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.267474 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-cache\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.267912 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-lock\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.268034 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.268137 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-cache\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.268071 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.268195 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb2lm\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-kube-api-access-sb2lm\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.268156 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") device mount path \"/mnt/openstack/pv05\"" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.284313 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.291992 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.293453 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb2lm\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-kube-api-access-sb2lm\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.688188 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"]
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.690150 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.693090 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"cert-swift-internal-svc"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.694479 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"cert-swift-public-svc"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.709953 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"]
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.776441 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrw7k\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-kube-api-access-zrw7k\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.776503 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-internal-tls-certs\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.776575 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-combined-ca-bundle\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.776628 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.776655 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-log-httpd\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.776769 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-public-tls-certs\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.776797 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-config-data\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.776816 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.776866 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.776899 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.776914 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-run-httpd\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.776962 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift podName:b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:18.776937958 +0000 UTC m=+1958.531676261 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift") pod "swift-storage-0" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96") : configmap "swift-ring-files" not found
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.877969 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-config-data\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.878037 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.878064 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-public-tls-certs\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.878129 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-run-httpd\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.878158 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrw7k\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-kube-api-access-zrw7k\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.878195 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-internal-tls-certs\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.878255 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-combined-ca-bundle\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.878313 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-log-httpd\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.879092 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-run-httpd\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.879141 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-log-httpd\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.879308 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.879351 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj: configmap "swift-ring-files" not found
Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.879472 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift podName:28a15191-c413-47d3-bf30-00cfea074db4 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:18.379443672 +0000 UTC m=+1958.134182025 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift") pod "swift-proxy-5c474fc7f4-tg6pj" (UID: "28a15191-c413-47d3-bf30-00cfea074db4") : configmap "swift-ring-files" not found
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.882091 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-combined-ca-bundle\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.885200 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-config-data\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.886499 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-public-tls-certs\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.886760 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-internal-tls-certs\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.920654 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrw7k\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-kube-api-access-zrw7k\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:18 crc kubenswrapper[4763]: I0131 15:27:18.386084 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:18 crc kubenswrapper[4763]: E0131 15:27:18.386351 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:27:18 crc kubenswrapper[4763]: E0131 15:27:18.386405 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj: configmap "swift-ring-files" not found
Jan 31 15:27:18 crc kubenswrapper[4763]: E0131 15:27:18.386527 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift podName:28a15191-c413-47d3-bf30-00cfea074db4 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:19.386491445 +0000 UTC m=+1959.141229788 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift") pod "swift-proxy-5c474fc7f4-tg6pj" (UID: "28a15191-c413-47d3-bf30-00cfea074db4") : configmap "swift-ring-files" not found
Jan 31 15:27:18 crc kubenswrapper[4763]: I0131 15:27:18.792770 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:27:18 crc kubenswrapper[4763]: E0131 15:27:18.793138 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:27:18 crc kubenswrapper[4763]: E0131 15:27:18.793203 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:27:18 crc kubenswrapper[4763]: E0131 15:27:18.793314 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift podName:b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:20.793279896 +0000 UTC m=+1960.548018229 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift") pod "swift-storage-0" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96") : configmap "swift-ring-files" not found
Jan 31 15:27:19 crc kubenswrapper[4763]: I0131 15:27:19.056486 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7390eb43-2a86-4ad9-b504-6fca814daf1c" path="/var/lib/kubelet/pods/7390eb43-2a86-4ad9-b504-6fca814daf1c/volumes"
Jan 31 15:27:19 crc kubenswrapper[4763]: I0131 15:27:19.403977 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:19 crc kubenswrapper[4763]: E0131 15:27:19.404127 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:27:19 crc kubenswrapper[4763]: E0131 15:27:19.404384 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj: configmap "swift-ring-files" not found
Jan 31 15:27:19 crc kubenswrapper[4763]: E0131 15:27:19.404476 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift podName:28a15191-c413-47d3-bf30-00cfea074db4 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:21.404445537 +0000 UTC m=+1961.159183870 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift") pod "swift-proxy-5c474fc7f4-tg6pj" (UID: "28a15191-c413-47d3-bf30-00cfea074db4") : configmap "swift-ring-files" not found
Jan 31 15:27:20 crc kubenswrapper[4763]: I0131 15:27:20.826140 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:27:20 crc kubenswrapper[4763]: E0131 15:27:20.826469 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:27:20 crc kubenswrapper[4763]: E0131 15:27:20.826511 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:27:20 crc kubenswrapper[4763]: E0131 15:27:20.826609 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift podName:b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:24.826576148 +0000 UTC m=+1964.581314481 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift") pod "swift-storage-0" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96") : configmap "swift-ring-files" not found
Jan 31 15:27:21 crc kubenswrapper[4763]: I0131 15:27:21.442580 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:21 crc kubenswrapper[4763]: E0131 15:27:21.442784 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:27:21 crc kubenswrapper[4763]: E0131 15:27:21.442814 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj: configmap "swift-ring-files" not found
Jan 31 15:27:21 crc kubenswrapper[4763]: E0131 15:27:21.442877 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift podName:28a15191-c413-47d3-bf30-00cfea074db4 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:25.442860137 +0000 UTC m=+1965.197598430 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift") pod "swift-proxy-5c474fc7f4-tg6pj" (UID: "28a15191-c413-47d3-bf30-00cfea074db4") : configmap "swift-ring-files" not found
Jan 31 15:27:24 crc kubenswrapper[4763]: I0131 15:27:24.891811 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:27:24 crc kubenswrapper[4763]: E0131 15:27:24.892074 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:27:24 crc kubenswrapper[4763]: E0131 15:27:24.892092 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:27:24 crc kubenswrapper[4763]: E0131 15:27:24.892150 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift podName:b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:32.892130215 +0000 UTC m=+1972.646868508 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift") pod "swift-storage-0" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96") : configmap "swift-ring-files" not found
Jan 31 15:27:25 crc kubenswrapper[4763]: I0131 15:27:25.504003 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:25 crc kubenswrapper[4763]: E0131 15:27:25.504248 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:27:25 crc kubenswrapper[4763]: E0131 15:27:25.504382 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj: configmap "swift-ring-files" not found
Jan 31 15:27:25 crc kubenswrapper[4763]: E0131 15:27:25.504453 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift podName:28a15191-c413-47d3-bf30-00cfea074db4 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:33.504433048 +0000 UTC m=+1973.259171361 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift") pod "swift-proxy-5c474fc7f4-tg6pj" (UID: "28a15191-c413-47d3-bf30-00cfea074db4") : configmap "swift-ring-files" not found
Jan 31 15:27:29 crc kubenswrapper[4763]: E0131 15:27:29.992671 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" podUID="c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc"
Jan 31 15:27:30 crc kubenswrapper[4763]: I0131 15:27:30.120375 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:27:30 crc kubenswrapper[4763]: I0131 15:27:30.688288 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:27:30 crc kubenswrapper[4763]: E0131 15:27:30.688532 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:27:30 crc kubenswrapper[4763]: E0131 15:27:30.688570 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found
Jan 31 15:27:30 crc kubenswrapper[4763]: E0131 15:27:30.688661 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:29:32.688631372 +0000 UTC m=+2092.443369695 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found Jan 31 15:27:32 crc kubenswrapper[4763]: I0131 15:27:32.931929 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:32 crc kubenswrapper[4763]: E0131 15:27:32.932168 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:27:32 crc kubenswrapper[4763]: E0131 15:27:32.932427 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:27:32 crc kubenswrapper[4763]: E0131 15:27:32.932510 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift podName:b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:48.932485053 +0000 UTC m=+1988.687223386 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift") pod "swift-storage-0" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96") : configmap "swift-ring-files" not found Jan 31 15:27:33 crc kubenswrapper[4763]: I0131 15:27:33.544210 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:33 crc kubenswrapper[4763]: E0131 15:27:33.544476 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:27:33 crc kubenswrapper[4763]: E0131 15:27:33.544534 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj: configmap "swift-ring-files" not found Jan 31 15:27:33 crc kubenswrapper[4763]: E0131 15:27:33.544640 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift podName:28a15191-c413-47d3-bf30-00cfea074db4 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:49.54461218 +0000 UTC m=+1989.299350513 (durationBeforeRetry 16s). 
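Every failure in the stretch above traces to a single missing object: the "swift-ring-files" ConfigMap that the projected "etc-swift" volume sources. As a rough reconstruction, not the swift-operator's actual code, the volume spec implied by these errors looks like the Go sketch below; only the names in it come from this log, everything else is an assumption.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// etcSwiftVolume sketches the projected volume implied by the errors above:
// a volume named "etc-swift" sourcing the "swift-ring-files" ConfigMap.
// The real spec may carry additional sources and a DefaultMode.
func etcSwiftVolume() corev1.Volume {
	return corev1.Volume{
		Name: "etc-swift",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							// Absent until the ring-rebalance job publishes it,
							// which is exactly what projected.go:288 reports.
							Name: "swift-ring-files",
						},
						// Optional is left nil (false), so kubelet keeps
						// retrying SetUp instead of mounting an empty volume.
					},
				}},
			},
		},
	}
}

func main() {
	v := etcSwiftVolume()
	fmt.Println(v.Name, "<-", v.Projected.Sources[0].ConfigMap.Name)
}

Because the ConfigMap source is mandatory, sandbox creation for swift-storage-0 and both swift-proxy replicas stays blocked until the object appears, as the retries that follow show.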
Jan 31 15:27:44 crc kubenswrapper[4763]: I0131 15:27:44.177977 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 15:27:44 crc kubenswrapper[4763]: I0131 15:27:44.178936 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 15:27:49 crc kubenswrapper[4763]: I0131 15:27:49.005887 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:27:49 crc kubenswrapper[4763]: E0131 15:27:49.006253 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:27:49 crc kubenswrapper[4763]: E0131 15:27:49.006763 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:27:49 crc kubenswrapper[4763]: E0131 15:27:49.006868 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift podName:b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96 nodeName:}" failed. No retries permitted until 2026-01-31 15:28:21.006836896 +0000 UTC m=+2020.761575229 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift") pod "swift-storage-0" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96") : configmap "swift-ring-files" not found
Jan 31 15:27:49 crc kubenswrapper[4763]: I0131 15:27:49.616300 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:27:49 crc kubenswrapper[4763]: E0131 15:27:49.616561 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:27:49 crc kubenswrapper[4763]: E0131 15:27:49.616650 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj: configmap "swift-ring-files" not found
Jan 31 15:27:49 crc kubenswrapper[4763]: E0131 15:27:49.616831 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift podName:28a15191-c413-47d3-bf30-00cfea074db4 nodeName:}" failed. No retries permitted until 2026-01-31 15:28:21.616790585 +0000 UTC m=+2021.371528918 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift") pod "swift-proxy-5c474fc7f4-tg6pj" (UID: "28a15191-c413-47d3-bf30-00cfea074db4") : configmap "swift-ring-files" not found
Jan 31 15:28:14 crc kubenswrapper[4763]: I0131 15:28:14.177502 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 15:28:14 crc kubenswrapper[4763]: I0131 15:28:14.178266 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 15:28:14 crc kubenswrapper[4763]: I0131 15:28:14.178341 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x"
Jan 31 15:28:14 crc kubenswrapper[4763]: I0131 15:28:14.179479 4763 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2bfabe5df54337c4196a35b7009ba75d2a72e0f1ccf13fd0226987ae1c4b7c11"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 15:28:14 crc kubenswrapper[4763]: I0131 15:28:14.179591 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://2bfabe5df54337c4196a35b7009ba75d2a72e0f1ccf13fd0226987ae1c4b7c11" gracePeriod=600
Jan 31 15:28:15 crc kubenswrapper[4763]: I0131 15:28:15.368715 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" podUID="d73a5142-56cf-4676-a6f1-a00868938c4d" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 15:28:16 crc kubenswrapper[4763]: I0131 15:28:16.988228 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="2bfabe5df54337c4196a35b7009ba75d2a72e0f1ccf13fd0226987ae1c4b7c11" exitCode=0
Jan 31 15:28:16 crc kubenswrapper[4763]: I0131 15:28:16.988598 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"2bfabe5df54337c4196a35b7009ba75d2a72e0f1ccf13fd0226987ae1c4b7c11"}
Jan 31 15:28:16 crc kubenswrapper[4763]: I0131 15:28:16.988629 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb"}
Jan 31 15:28:16 crc kubenswrapper[4763]: I0131 15:28:16.988649 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053"
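The machine-config-daemon entries above show kubelet's liveness machinery end to end: patch_prober records the failing GET against http://127.0.0.1:8798/health, prober.go marks the probe failed, and the runtime kills the container under the pod's 600s grace period before starting a replacement (the ContainerDied/ContainerStarted pair at 15:28:16). A probe of the shape implied by those entries, again only a sketch with k8s.io/api types; host, path, and port are taken from the log, while the thresholds are illustrative assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := corev1.Probe{
		// The field is named ProbeHandler in recent k8s.io/api releases
		// (plain Handler before v0.23).
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1",
				Path: "/health",
				Port: intstr.FromInt(8798),
			},
		},
		PeriodSeconds:    30, // assumption, not taken from the log
		FailureThreshold: 3,  // assumption, not taken from the log
	}
	fmt.Printf("liveness: GET http://%s:%s%s\n",
		probe.HTTPGet.Host, probe.HTTPGet.Port.String(), probe.HTTPGet.Path)
}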
containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:28:21 crc kubenswrapper[4763]: I0131 15:28:21.048259 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:28:21 crc kubenswrapper[4763]: E0131 15:28:21.048413 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:28:21 crc kubenswrapper[4763]: E0131 15:28:21.048674 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:28:21 crc kubenswrapper[4763]: E0131 15:28:21.048754 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift podName:b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96 nodeName:}" failed. No retries permitted until 2026-01-31 15:29:25.048734561 +0000 UTC m=+2084.803472854 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift") pod "swift-storage-0" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96") : configmap "swift-ring-files" not found Jan 31 15:28:21 crc kubenswrapper[4763]: I0131 15:28:21.655950 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:28:21 crc kubenswrapper[4763]: E0131 15:28:21.656210 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:28:21 crc kubenswrapper[4763]: E0131 15:28:21.656233 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj: configmap "swift-ring-files" not found Jan 31 15:28:21 crc kubenswrapper[4763]: E0131 15:28:21.656330 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift podName:28a15191-c413-47d3-bf30-00cfea074db4 nodeName:}" failed. No retries permitted until 2026-01-31 15:29:25.656303575 +0000 UTC m=+2085.411041908 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift") pod "swift-proxy-5c474fc7f4-tg6pj" (UID: "28a15191-c413-47d3-bf30-00cfea074db4") : configmap "swift-ring-files" not found Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.634819 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9rk9x"] Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.638863 4763 util.go:30] "No sandbox for pod can be found. 
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.634819 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9rk9x"]
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.638863 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.642016 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.642442 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.669867 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9rk9x"]
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.779222 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-swiftconf\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.779278 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52163549-6ae3-46bc-87bc-6f909ee6f511-etc-swift\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.779319 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-dispersionconf\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.779372 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-ring-data-devices\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.779417 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j785\" (UniqueName: \"kubernetes.io/projected/52163549-6ae3-46bc-87bc-6f909ee6f511-kube-api-access-4j785\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.779454 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-scripts\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.880949 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-scripts\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.881201 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-swiftconf\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.881274 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52163549-6ae3-46bc-87bc-6f909ee6f511-etc-swift\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.881349 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-dispersionconf\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.881463 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-ring-data-devices\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.881554 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j785\" (UniqueName: \"kubernetes.io/projected/52163549-6ae3-46bc-87bc-6f909ee6f511-kube-api-access-4j785\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.882595 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52163549-6ae3-46bc-87bc-6f909ee6f511-etc-swift\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.882679 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-scripts\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.883929 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-ring-data-devices\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.890673 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-swiftconf\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.891123 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-dispersionconf\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.910258 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j785\" (UniqueName: \"kubernetes.io/projected/52163549-6ae3-46bc-87bc-6f909ee6f511-kube-api-access-4j785\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.978661 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-ngwjv"
Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.986961 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:17 crc kubenswrapper[4763]: I0131 15:29:17.401163 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9rk9x"]
Jan 31 15:29:17 crc kubenswrapper[4763]: I0131 15:29:17.472554 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" event={"ID":"52163549-6ae3-46bc-87bc-6f909ee6f511","Type":"ContainerStarted","Data":"d468a84447e352b3d831b84757002f0c6e26c89712b6a917b436c446766e4d6d"}
Jan 31 15:29:18 crc kubenswrapper[4763]: I0131 15:29:18.482859 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" event={"ID":"52163549-6ae3-46bc-87bc-6f909ee6f511","Type":"ContainerStarted","Data":"6c1accfe18801fff7cebc38d02887e93a257691b6f17a553032f19d001556184"}
Jan 31 15:29:18 crc kubenswrapper[4763]: I0131 15:29:18.502291 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" podStartSLOduration=2.502265998 podStartE2EDuration="2.502265998s" podCreationTimestamp="2026-01-31 15:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:29:18.499506953 +0000 UTC m=+2078.254245246" watchObservedRunningTime="2026-01-31 15:29:18.502265998 +0000 UTC m=+2078.257004321"
Jan 31 15:29:20 crc kubenswrapper[4763]: E0131 15:29:20.153240 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96"
Jan 31 15:29:20 crc kubenswrapper[4763]: I0131 15:29:20.497915 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:29:20 crc kubenswrapper[4763]: E0131 15:29:20.715016 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" podUID="28a15191-c413-47d3-bf30-00cfea074db4"
Jan 31 15:29:21 crc kubenswrapper[4763]: I0131 15:29:21.504850 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:29:24 crc kubenswrapper[4763]: I0131 15:29:24.530061 4763 generic.go:334] "Generic (PLEG): container finished" podID="52163549-6ae3-46bc-87bc-6f909ee6f511" containerID="6c1accfe18801fff7cebc38d02887e93a257691b6f17a553032f19d001556184" exitCode=0
Jan 31 15:29:24 crc kubenswrapper[4763]: I0131 15:29:24.530146 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" event={"ID":"52163549-6ae3-46bc-87bc-6f909ee6f511","Type":"ContainerDied","Data":"6c1accfe18801fff7cebc38d02887e93a257691b6f17a553032f19d001556184"}
Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.123731 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.141150 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.299429 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.733803 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.740549 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.773471 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.856007 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.938635 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-scripts\") pod \"52163549-6ae3-46bc-87bc-6f909ee6f511\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") "
Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.939083 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j785\" (UniqueName: \"kubernetes.io/projected/52163549-6ae3-46bc-87bc-6f909ee6f511-kube-api-access-4j785\") pod \"52163549-6ae3-46bc-87bc-6f909ee6f511\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") "
Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.939133 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52163549-6ae3-46bc-87bc-6f909ee6f511-etc-swift\") pod \"52163549-6ae3-46bc-87bc-6f909ee6f511\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") "
Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.939156 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-swiftconf\") pod \"52163549-6ae3-46bc-87bc-6f909ee6f511\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") "
Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.939175 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-ring-data-devices\") pod \"52163549-6ae3-46bc-87bc-6f909ee6f511\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") "
Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.939218 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-dispersionconf\") pod \"52163549-6ae3-46bc-87bc-6f909ee6f511\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") "
Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.940243 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52163549-6ae3-46bc-87bc-6f909ee6f511-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "52163549-6ae3-46bc-87bc-6f909ee6f511" (UID: "52163549-6ae3-46bc-87bc-6f909ee6f511"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.940345 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "52163549-6ae3-46bc-87bc-6f909ee6f511" (UID: "52163549-6ae3-46bc-87bc-6f909ee6f511"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.946997 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52163549-6ae3-46bc-87bc-6f909ee6f511-kube-api-access-4j785" (OuterVolumeSpecName: "kube-api-access-4j785") pod "52163549-6ae3-46bc-87bc-6f909ee6f511" (UID: "52163549-6ae3-46bc-87bc-6f909ee6f511"). InnerVolumeSpecName "kube-api-access-4j785". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.958332 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-scripts" (OuterVolumeSpecName: "scripts") pod "52163549-6ae3-46bc-87bc-6f909ee6f511" (UID: "52163549-6ae3-46bc-87bc-6f909ee6f511"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.958604 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "52163549-6ae3-46bc-87bc-6f909ee6f511" (UID: "52163549-6ae3-46bc-87bc-6f909ee6f511"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.958762 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "52163549-6ae3-46bc-87bc-6f909ee6f511" (UID: "52163549-6ae3-46bc-87bc-6f909ee6f511"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.006317 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.041293 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52163549-6ae3-46bc-87bc-6f909ee6f511-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.041333 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.041344 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.041356 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.041552 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.041573 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j785\" (UniqueName: \"kubernetes.io/projected/52163549-6ae3-46bc-87bc-6f909ee6f511-kube-api-access-4j785\") on node \"crc\" DevicePath \"\""
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.245226 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"]
Jan 31 15:29:26 crc kubenswrapper[4763]: W0131 15:29:26.252875 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28a15191_c413_47d3_bf30_00cfea074db4.slice/crio-5be2d540c55e3fd7015dc71d6c2acdb224aa3a06b89724aa15fe1f287322b4fe WatchSource:0}: Error finding container 5be2d540c55e3fd7015dc71d6c2acdb224aa3a06b89724aa15fe1f287322b4fe: Status 404 returned error can't find the container with id 5be2d540c55e3fd7015dc71d6c2acdb224aa3a06b89724aa15fe1f287322b4fe
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.548306 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"c6776da1a16e11de5e0299bf5a94671ceb2fc44db56762423b817128616ad972"}
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.548374 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"27af2de829fbb2ed8d9f250d2e6bfbc1c3ba8467ddb49414f8464eb2d5f6f4ee"}
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.548402 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"b7a53ccb6ba4646a1c596b8440d55d97505667eefcbaff4f73ec11d24991e295"}
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.548423 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"08b94966ac20bf1259e091c08cf195b1a8050f7facc3b6925f12ae4f6758d415"}
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.550900 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" event={"ID":"52163549-6ae3-46bc-87bc-6f909ee6f511","Type":"ContainerDied","Data":"d468a84447e352b3d831b84757002f0c6e26c89712b6a917b436c446766e4d6d"}
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.550949 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.550955 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d468a84447e352b3d831b84757002f0c6e26c89712b6a917b436c446766e4d6d"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.552516 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" event={"ID":"28a15191-c413-47d3-bf30-00cfea074db4","Type":"ContainerStarted","Data":"94d6f7a85414be8a5e46bae79d955e31d2f5e10b15b2ca8ff97bde5b77d90037"}
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.552563 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" event={"ID":"28a15191-c413-47d3-bf30-00cfea074db4","Type":"ContainerStarted","Data":"5be2d540c55e3fd7015dc71d6c2acdb224aa3a06b89724aa15fe1f287322b4fe"}
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.639632 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9rk9x"]
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.649005 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9rk9x"]
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.660519 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-hh8jm"]
Jan 31 15:29:26 crc kubenswrapper[4763]: E0131 15:29:26.660888 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52163549-6ae3-46bc-87bc-6f909ee6f511" containerName="swift-ring-rebalance"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.660912 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="52163549-6ae3-46bc-87bc-6f909ee6f511" containerName="swift-ring-rebalance"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.661096 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="52163549-6ae3-46bc-87bc-6f909ee6f511" containerName="swift-ring-rebalance"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.661727 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.666499 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-hh8jm"]
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.673566 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.673597 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.763210 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-dispersionconf\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.763248 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-combined-ca-bundle\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.763286 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-scripts\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.763314 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpg4s\" (UniqueName: \"kubernetes.io/projected/bb93fd42-19ef-474d-9a43-5f77f863f4f3-kube-api-access-tpg4s\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.763349 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-swiftconf\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.763375 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb93fd42-19ef-474d-9a43-5f77f863f4f3-etc-swift\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.763429 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-ring-data-devices\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.865211 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-scripts\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.865796 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpg4s\" (UniqueName: \"kubernetes.io/projected/bb93fd42-19ef-474d-9a43-5f77f863f4f3-kube-api-access-tpg4s\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.865929 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-swiftconf\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.866015 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb93fd42-19ef-474d-9a43-5f77f863f4f3-etc-swift\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.866137 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-ring-data-devices\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.866287 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-dispersionconf\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.866338 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-combined-ca-bundle\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.866740 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb93fd42-19ef-474d-9a43-5f77f863f4f3-etc-swift\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.867074 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-ring-data-devices\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.867518 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-scripts\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.870751 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-dispersionconf\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.871658 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-combined-ca-bundle\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.879369 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-swiftconf\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.884944 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpg4s\" (UniqueName: \"kubernetes.io/projected/bb93fd42-19ef-474d-9a43-5f77f863f4f3-kube-api-access-tpg4s\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.991557 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm"
Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.050834 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52163549-6ae3-46bc-87bc-6f909ee6f511" path="/var/lib/kubelet/pods/52163549-6ae3-46bc-87bc-6f909ee6f511/volumes"
Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.230973 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-hh8jm"]
Jan 31 15:29:27 crc kubenswrapper[4763]: W0131 15:29:27.239477 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb93fd42_19ef_474d_9a43_5f77f863f4f3.slice/crio-fb42295730d17767575eebb14b287990ae5d6c880a545500d832c4d6ef96ac26 WatchSource:0}: Error finding container fb42295730d17767575eebb14b287990ae5d6c880a545500d832c4d6ef96ac26: Status 404 returned error can't find the container with id fb42295730d17767575eebb14b287990ae5d6c880a545500d832c4d6ef96ac26
Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.563279 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"5cc3982d159833064b5c1cdf1058f4d310c549bc30d9583666bb8c50d8deb789"}
Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.563555 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"3334bd67954ed15598914f9f4fbef205014fe259154cc379e034e929afe48ba6"}
Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.563634 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"3faecda72dcb03b6c1659c54cec9556dc478a9f14238f362f4c1b75ec2935478"}
Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.563722 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"4ff6b19267726798e5bafd41f33c57adc7cb799816fce89a0072cb78d8920082"}
Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.563793 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"8a5f0dcb0aa81eaac0bb8540fa03cbb7e7d3594bfb5d9f0ba94b71a916c7c113"}
Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.565514 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" event={"ID":"28a15191-c413-47d3-bf30-00cfea074db4","Type":"ContainerStarted","Data":"1524702ffca715bd0db7eccd119fb193efb8441cda7e530f3c83b895f51ec885"}
Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.565926 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.566060 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"
Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.567656 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" event={"ID":"bb93fd42-19ef-474d-9a43-5f77f863f4f3","Type":"ContainerStarted","Data":"2bda736f39ac7738f7e5a8baee5357943f15ea2730a9ddbd6eb3bf5d9f6ff0ff"}
Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.567762 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" event={"ID":"bb93fd42-19ef-474d-9a43-5f77f863f4f3","Type":"ContainerStarted","Data":"fb42295730d17767575eebb14b287990ae5d6c880a545500d832c4d6ef96ac26"}
Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.609315 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" podStartSLOduration=1.609300641 podStartE2EDuration="1.609300641s" podCreationTimestamp="2026-01-31 15:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:29:27.608979142 +0000 UTC m=+2087.363717435" watchObservedRunningTime="2026-01-31 15:29:27.609300641 +0000 UTC m=+2087.364038924"
Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.611218 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" podStartSLOduration=130.611211293 podStartE2EDuration="2m10.611211293s" podCreationTimestamp="2026-01-31 15:27:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:29:27.590872574 +0000 UTC m=+2087.345610867" watchObservedRunningTime="2026-01-31 15:29:27.611211293 +0000 UTC m=+2087.365949576"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.779455 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zznq2"] Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.884017 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw9k4\" (UniqueName: \"kubernetes.io/projected/f4ae6311-229c-409c-9ef7-42c47c4d009f-kube-api-access-jw9k4\") pod \"redhat-marketplace-zznq2\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.884102 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-utilities\") pod \"redhat-marketplace-zznq2\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.884155 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-catalog-content\") pod \"redhat-marketplace-zznq2\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.985293 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw9k4\" (UniqueName: \"kubernetes.io/projected/f4ae6311-229c-409c-9ef7-42c47c4d009f-kube-api-access-jw9k4\") pod \"redhat-marketplace-zznq2\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.985376 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-utilities\") pod \"redhat-marketplace-zznq2\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.985421 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-catalog-content\") pod \"redhat-marketplace-zznq2\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.985855 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-catalog-content\") pod \"redhat-marketplace-zznq2\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.986077 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-utilities\") pod \"redhat-marketplace-zznq2\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:28 crc kubenswrapper[4763]: I0131 15:29:28.003565 4763 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jw9k4\" (UniqueName: \"kubernetes.io/projected/f4ae6311-229c-409c-9ef7-42c47c4d009f-kube-api-access-jw9k4\") pod \"redhat-marketplace-zznq2\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:28 crc kubenswrapper[4763]: I0131 15:29:28.076151 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:28 crc kubenswrapper[4763]: I0131 15:29:28.599662 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"4ce013d55c9bbe30b3a876684f92e64239ef5b5ebc001cea9bdff39c5628c354"} Jan 31 15:29:28 crc kubenswrapper[4763]: I0131 15:29:28.599916 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"13645e6ffc4344b9af2d1ceda74f8853a0dba80f284062e6d075c13aefd2c584"} Jan 31 15:29:28 crc kubenswrapper[4763]: I0131 15:29:28.599929 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"f7a6591e71b1676140102a34e2d10a5e2e57e3288fef923f32e0ff238738e50b"} Jan 31 15:29:28 crc kubenswrapper[4763]: I0131 15:29:28.599939 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"75623ddf20f8fda7b57c75c03bc9e8bada9ac4dad0d734f2ba1d8790ab36d2c1"} Jan 31 15:29:28 crc kubenswrapper[4763]: I0131 15:29:28.599949 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"d475c3ef66bf23eca234aaaa87dccd531be71d1b54e59d9e34a453e486d2922d"} Jan 31 15:29:28 crc kubenswrapper[4763]: I0131 15:29:28.599958 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"3c538f45263d816240971928fab851f939c124cdc4848caf9f7b76f74cad7462"} Jan 31 15:29:28 crc kubenswrapper[4763]: I0131 15:29:28.628255 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zznq2"] Jan 31 15:29:29 crc kubenswrapper[4763]: I0131 15:29:29.620131 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"b0b563b544c427ad8b7a14d7adac600e852739031807938b6bfb905e2e57f1b5"} Jan 31 15:29:29 crc kubenswrapper[4763]: I0131 15:29:29.627643 4763 generic.go:334] "Generic (PLEG): container finished" podID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerID="789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d" exitCode=0 Jan 31 15:29:29 crc kubenswrapper[4763]: I0131 15:29:29.627731 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zznq2" event={"ID":"f4ae6311-229c-409c-9ef7-42c47c4d009f","Type":"ContainerDied","Data":"789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d"} Jan 31 15:29:29 crc kubenswrapper[4763]: I0131 15:29:29.627812 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-zznq2" event={"ID":"f4ae6311-229c-409c-9ef7-42c47c4d009f","Type":"ContainerStarted","Data":"200089e5c6c719127b84991a83acfdb070ea832ba2581a5989e580c35ecbd770"} Jan 31 15:29:29 crc kubenswrapper[4763]: I0131 15:29:29.653351 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=132.653335943 podStartE2EDuration="2m12.653335943s" podCreationTimestamp="2026-01-31 15:27:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:29:29.648905783 +0000 UTC m=+2089.403644076" watchObservedRunningTime="2026-01-31 15:29:29.653335943 +0000 UTC m=+2089.408074236" Jan 31 15:29:30 crc kubenswrapper[4763]: I0131 15:29:30.640244 4763 generic.go:334] "Generic (PLEG): container finished" podID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerID="b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7" exitCode=0 Jan 31 15:29:30 crc kubenswrapper[4763]: I0131 15:29:30.640346 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zznq2" event={"ID":"f4ae6311-229c-409c-9ef7-42c47c4d009f","Type":"ContainerDied","Data":"b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7"} Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.022964 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.052186 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.129470 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-77c98d654c-29n65"] Jan 31 15:29:31 crc kubenswrapper[4763]: E0131 15:29:31.131636 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context canceled" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" podUID="c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.649656 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.649663 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zznq2" event={"ID":"f4ae6311-229c-409c-9ef7-42c47c4d009f","Type":"ContainerStarted","Data":"3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219"} Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.658490 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.743342 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-run-httpd\") pod \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.743437 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-config-data\") pod \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.743498 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-log-httpd\") pod \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.743539 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fssq6\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-kube-api-access-fssq6\") pod \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.744359 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.745403 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.745549 4763 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.749896 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-kube-api-access-fssq6" (OuterVolumeSpecName: "kube-api-access-fssq6") pod "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc"). InnerVolumeSpecName "kube-api-access-fssq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.755849 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-config-data" (OuterVolumeSpecName: "config-data") pod "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.846706 4763 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.847038 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fssq6\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-kube-api-access-fssq6\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.847055 4763 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:32 crc kubenswrapper[4763]: I0131 15:29:32.656108 4763 generic.go:334] "Generic (PLEG): container finished" podID="bb93fd42-19ef-474d-9a43-5f77f863f4f3" containerID="2bda736f39ac7738f7e5a8baee5357943f15ea2730a9ddbd6eb3bf5d9f6ff0ff" exitCode=0 Jan 31 15:29:32 crc kubenswrapper[4763]: I0131 15:29:32.656185 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" event={"ID":"bb93fd42-19ef-474d-9a43-5f77f863f4f3","Type":"ContainerDied","Data":"2bda736f39ac7738f7e5a8baee5357943f15ea2730a9ddbd6eb3bf5d9f6ff0ff"} Jan 31 15:29:32 crc kubenswrapper[4763]: I0131 15:29:32.656257 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:29:32 crc kubenswrapper[4763]: I0131 15:29:32.675861 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zznq2" podStartSLOduration=4.213732692 podStartE2EDuration="5.675844232s" podCreationTimestamp="2026-01-31 15:29:27 +0000 UTC" firstStartedPulling="2026-01-31 15:29:29.629538471 +0000 UTC m=+2089.384276764" lastFinishedPulling="2026-01-31 15:29:31.091650001 +0000 UTC m=+2090.846388304" observedRunningTime="2026-01-31 15:29:31.676082051 +0000 UTC m=+2091.430820344" watchObservedRunningTime="2026-01-31 15:29:32.675844232 +0000 UTC m=+2092.430582525" Jan 31 15:29:32 crc kubenswrapper[4763]: I0131 15:29:32.701717 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-77c98d654c-29n65"] Jan 31 15:29:32 crc kubenswrapper[4763]: I0131 15:29:32.714957 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-77c98d654c-29n65"] Jan 31 15:29:32 crc kubenswrapper[4763]: I0131 15:29:32.760985 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.051118 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" path="/var/lib/kubelet/pods/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc/volumes" Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.909320 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.978191 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-ring-data-devices\") pod \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.978270 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-dispersionconf\") pod \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.978305 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-scripts\") pod \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.978344 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpg4s\" (UniqueName: \"kubernetes.io/projected/bb93fd42-19ef-474d-9a43-5f77f863f4f3-kube-api-access-tpg4s\") pod \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.978391 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-combined-ca-bundle\") pod \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.978460 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb93fd42-19ef-474d-9a43-5f77f863f4f3-etc-swift\") pod \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.978527 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-swiftconf\") pod \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.979324 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb93fd42-19ef-474d-9a43-5f77f863f4f3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bb93fd42-19ef-474d-9a43-5f77f863f4f3" (UID: "bb93fd42-19ef-474d-9a43-5f77f863f4f3"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.979500 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "bb93fd42-19ef-474d-9a43-5f77f863f4f3" (UID: "bb93fd42-19ef-474d-9a43-5f77f863f4f3"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.984906 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb93fd42-19ef-474d-9a43-5f77f863f4f3-kube-api-access-tpg4s" (OuterVolumeSpecName: "kube-api-access-tpg4s") pod "bb93fd42-19ef-474d-9a43-5f77f863f4f3" (UID: "bb93fd42-19ef-474d-9a43-5f77f863f4f3"). InnerVolumeSpecName "kube-api-access-tpg4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.000328 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "bb93fd42-19ef-474d-9a43-5f77f863f4f3" (UID: "bb93fd42-19ef-474d-9a43-5f77f863f4f3"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.001087 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb93fd42-19ef-474d-9a43-5f77f863f4f3" (UID: "bb93fd42-19ef-474d-9a43-5f77f863f4f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.008394 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-scripts" (OuterVolumeSpecName: "scripts") pod "bb93fd42-19ef-474d-9a43-5f77f863f4f3" (UID: "bb93fd42-19ef-474d-9a43-5f77f863f4f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.013588 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "bb93fd42-19ef-474d-9a43-5f77f863f4f3" (UID: "bb93fd42-19ef-474d-9a43-5f77f863f4f3"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.080346 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb93fd42-19ef-474d-9a43-5f77f863f4f3-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.080379 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.080388 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.080399 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.080408 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.080416 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpg4s\" (UniqueName: \"kubernetes.io/projected/bb93fd42-19ef-474d-9a43-5f77f863f4f3-kube-api-access-tpg4s\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.080424 4763 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.671759 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" event={"ID":"bb93fd42-19ef-474d-9a43-5f77f863f4f3","Type":"ContainerDied","Data":"fb42295730d17767575eebb14b287990ae5d6c880a545500d832c4d6ef96ac26"} Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.672197 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb42295730d17767575eebb14b287990ae5d6c880a545500d832c4d6ef96ac26" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.671833 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.547466 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-hh8jm"] Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.591064 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.591650 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-server" containerID="cri-o://b7a53ccb6ba4646a1c596b8440d55d97505667eefcbaff4f73ec11d24991e295" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.591964 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-updater" containerID="cri-o://5cc3982d159833064b5c1cdf1058f4d310c549bc30d9583666bb8c50d8deb789" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592147 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="swift-recon-cron" containerID="cri-o://b0b563b544c427ad8b7a14d7adac600e852739031807938b6bfb905e2e57f1b5" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592129 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-auditor" containerID="cri-o://3334bd67954ed15598914f9f4fbef205014fe259154cc379e034e929afe48ba6" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592200 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="rsync" containerID="cri-o://4ce013d55c9bbe30b3a876684f92e64239ef5b5ebc001cea9bdff39c5628c354" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592239 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-expirer" containerID="cri-o://13645e6ffc4344b9af2d1ceda74f8853a0dba80f284062e6d075c13aefd2c584" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592256 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-server" containerID="cri-o://4ff6b19267726798e5bafd41f33c57adc7cb799816fce89a0072cb78d8920082" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592236 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-replicator" containerID="cri-o://3faecda72dcb03b6c1659c54cec9556dc478a9f14238f362f4c1b75ec2935478" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592309 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-updater" 
containerID="cri-o://f7a6591e71b1676140102a34e2d10a5e2e57e3288fef923f32e0ff238738e50b" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592337 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-reaper" containerID="cri-o://8a5f0dcb0aa81eaac0bb8540fa03cbb7e7d3594bfb5d9f0ba94b71a916c7c113" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592346 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-auditor" containerID="cri-o://75623ddf20f8fda7b57c75c03bc9e8bada9ac4dad0d734f2ba1d8790ab36d2c1" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592386 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-replicator" containerID="cri-o://d475c3ef66bf23eca234aaaa87dccd531be71d1b54e59d9e34a453e486d2922d" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592400 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-auditor" containerID="cri-o://c6776da1a16e11de5e0299bf5a94671ceb2fc44db56762423b817128616ad972" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592422 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-server" containerID="cri-o://3c538f45263d816240971928fab851f939c124cdc4848caf9f7b76f74cad7462" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592457 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-replicator" containerID="cri-o://27af2de829fbb2ed8d9f250d2e6bfbc1c3ba8467ddb49414f8464eb2d5f6f4ee" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.611362 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-hh8jm"] Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.627026 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"] Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.627257 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" podUID="28a15191-c413-47d3-bf30-00cfea074db4" containerName="proxy-httpd" containerID="cri-o://94d6f7a85414be8a5e46bae79d955e31d2f5e10b15b2ca8ff97bde5b77d90037" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.627380 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" podUID="28a15191-c413-47d3-bf30-00cfea074db4" containerName="proxy-server" containerID="cri-o://1524702ffca715bd0db7eccd119fb193efb8441cda7e530f3c83b895f51ec885" gracePeriod=30 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.008081 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" podUID="28a15191-c413-47d3-bf30-00cfea074db4" 
containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.129:8080/healthcheck\": dial tcp 10.217.0.129:8080: connect: connection refused" Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.008092 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" podUID="28a15191-c413-47d3-bf30-00cfea074db4" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.129:8080/healthcheck\": dial tcp 10.217.0.129:8080: connect: connection refused" Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.691725 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="4ce013d55c9bbe30b3a876684f92e64239ef5b5ebc001cea9bdff39c5628c354" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692053 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="13645e6ffc4344b9af2d1ceda74f8853a0dba80f284062e6d075c13aefd2c584" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692067 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="f7a6591e71b1676140102a34e2d10a5e2e57e3288fef923f32e0ff238738e50b" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692078 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="75623ddf20f8fda7b57c75c03bc9e8bada9ac4dad0d734f2ba1d8790ab36d2c1" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692087 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="d475c3ef66bf23eca234aaaa87dccd531be71d1b54e59d9e34a453e486d2922d" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692096 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="3c538f45263d816240971928fab851f939c124cdc4848caf9f7b76f74cad7462" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692105 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="5cc3982d159833064b5c1cdf1058f4d310c549bc30d9583666bb8c50d8deb789" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692115 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="3334bd67954ed15598914f9f4fbef205014fe259154cc379e034e929afe48ba6" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692124 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="3faecda72dcb03b6c1659c54cec9556dc478a9f14238f362f4c1b75ec2935478" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.691748 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"4ce013d55c9bbe30b3a876684f92e64239ef5b5ebc001cea9bdff39c5628c354"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692185 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"13645e6ffc4344b9af2d1ceda74f8853a0dba80f284062e6d075c13aefd2c584"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692212 4763 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"f7a6591e71b1676140102a34e2d10a5e2e57e3288fef923f32e0ff238738e50b"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692133 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="4ff6b19267726798e5bafd41f33c57adc7cb799816fce89a0072cb78d8920082" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692253 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="8a5f0dcb0aa81eaac0bb8540fa03cbb7e7d3594bfb5d9f0ba94b71a916c7c113" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692272 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="c6776da1a16e11de5e0299bf5a94671ceb2fc44db56762423b817128616ad972" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692284 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="27af2de829fbb2ed8d9f250d2e6bfbc1c3ba8467ddb49414f8464eb2d5f6f4ee" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692293 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="b7a53ccb6ba4646a1c596b8440d55d97505667eefcbaff4f73ec11d24991e295" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692224 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"75623ddf20f8fda7b57c75c03bc9e8bada9ac4dad0d734f2ba1d8790ab36d2c1"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692344 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"d475c3ef66bf23eca234aaaa87dccd531be71d1b54e59d9e34a453e486d2922d"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692367 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"3c538f45263d816240971928fab851f939c124cdc4848caf9f7b76f74cad7462"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692381 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"5cc3982d159833064b5c1cdf1058f4d310c549bc30d9583666bb8c50d8deb789"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692393 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"3334bd67954ed15598914f9f4fbef205014fe259154cc379e034e929afe48ba6"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692403 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"3faecda72dcb03b6c1659c54cec9556dc478a9f14238f362f4c1b75ec2935478"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692415 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"4ff6b19267726798e5bafd41f33c57adc7cb799816fce89a0072cb78d8920082"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692426 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"8a5f0dcb0aa81eaac0bb8540fa03cbb7e7d3594bfb5d9f0ba94b71a916c7c113"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692438 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"c6776da1a16e11de5e0299bf5a94671ceb2fc44db56762423b817128616ad972"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692449 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"27af2de829fbb2ed8d9f250d2e6bfbc1c3ba8467ddb49414f8464eb2d5f6f4ee"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692461 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"b7a53ccb6ba4646a1c596b8440d55d97505667eefcbaff4f73ec11d24991e295"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.694198 4763 generic.go:334] "Generic (PLEG): container finished" podID="28a15191-c413-47d3-bf30-00cfea074db4" containerID="1524702ffca715bd0db7eccd119fb193efb8441cda7e530f3c83b895f51ec885" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.694220 4763 generic.go:334] "Generic (PLEG): container finished" podID="28a15191-c413-47d3-bf30-00cfea074db4" containerID="94d6f7a85414be8a5e46bae79d955e31d2f5e10b15b2ca8ff97bde5b77d90037" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.694242 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" event={"ID":"28a15191-c413-47d3-bf30-00cfea074db4","Type":"ContainerDied","Data":"1524702ffca715bd0db7eccd119fb193efb8441cda7e530f3c83b895f51ec885"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.694263 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" event={"ID":"28a15191-c413-47d3-bf30-00cfea074db4","Type":"ContainerDied","Data":"94d6f7a85414be8a5e46bae79d955e31d2f5e10b15b2ca8ff97bde5b77d90037"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.958337 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.028250 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-combined-ca-bundle\") pod \"28a15191-c413-47d3-bf30-00cfea074db4\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.028341 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-public-tls-certs\") pod \"28a15191-c413-47d3-bf30-00cfea074db4\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.028427 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-internal-tls-certs\") pod \"28a15191-c413-47d3-bf30-00cfea074db4\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.028457 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-run-httpd\") pod \"28a15191-c413-47d3-bf30-00cfea074db4\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.028483 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"28a15191-c413-47d3-bf30-00cfea074db4\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.028513 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrw7k\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-kube-api-access-zrw7k\") pod \"28a15191-c413-47d3-bf30-00cfea074db4\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.028560 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-config-data\") pod \"28a15191-c413-47d3-bf30-00cfea074db4\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.028611 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-log-httpd\") pod \"28a15191-c413-47d3-bf30-00cfea074db4\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.029178 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "28a15191-c413-47d3-bf30-00cfea074db4" (UID: "28a15191-c413-47d3-bf30-00cfea074db4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.029341 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "28a15191-c413-47d3-bf30-00cfea074db4" (UID: "28a15191-c413-47d3-bf30-00cfea074db4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.036476 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-kube-api-access-zrw7k" (OuterVolumeSpecName: "kube-api-access-zrw7k") pod "28a15191-c413-47d3-bf30-00cfea074db4" (UID: "28a15191-c413-47d3-bf30-00cfea074db4"). InnerVolumeSpecName "kube-api-access-zrw7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.040825 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "28a15191-c413-47d3-bf30-00cfea074db4" (UID: "28a15191-c413-47d3-bf30-00cfea074db4"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.049632 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb93fd42-19ef-474d-9a43-5f77f863f4f3" path="/var/lib/kubelet/pods/bb93fd42-19ef-474d-9a43-5f77f863f4f3/volumes" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.070415 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "28a15191-c413-47d3-bf30-00cfea074db4" (UID: "28a15191-c413-47d3-bf30-00cfea074db4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.075434 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-config-data" (OuterVolumeSpecName: "config-data") pod "28a15191-c413-47d3-bf30-00cfea074db4" (UID: "28a15191-c413-47d3-bf30-00cfea074db4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.076001 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28a15191-c413-47d3-bf30-00cfea074db4" (UID: "28a15191-c413-47d3-bf30-00cfea074db4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.093858 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "28a15191-c413-47d3-bf30-00cfea074db4" (UID: "28a15191-c413-47d3-bf30-00cfea074db4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.130362 4763 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.130428 4763 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.130449 4763 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.130468 4763 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.130484 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.130500 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrw7k\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-kube-api-access-zrw7k\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.130518 4763 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.130532 4763 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.707323 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" event={"ID":"28a15191-c413-47d3-bf30-00cfea074db4","Type":"ContainerDied","Data":"5be2d540c55e3fd7015dc71d6c2acdb224aa3a06b89724aa15fe1f287322b4fe"} Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.707409 4763 scope.go:117] "RemoveContainer" containerID="1524702ffca715bd0db7eccd119fb193efb8441cda7e530f3c83b895f51ec885" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.707481 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.746270 4763 scope.go:117] "RemoveContainer" containerID="94d6f7a85414be8a5e46bae79d955e31d2f5e10b15b2ca8ff97bde5b77d90037" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.746747 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"] Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.755027 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"] Jan 31 15:29:38 crc kubenswrapper[4763]: I0131 15:29:38.076870 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:38 crc kubenswrapper[4763]: I0131 15:29:38.076934 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:38 crc kubenswrapper[4763]: I0131 15:29:38.120879 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:38 crc kubenswrapper[4763]: I0131 15:29:38.795523 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:38 crc kubenswrapper[4763]: I0131 15:29:38.876001 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zznq2"] Jan 31 15:29:39 crc kubenswrapper[4763]: I0131 15:29:39.059953 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28a15191-c413-47d3-bf30-00cfea074db4" path="/var/lib/kubelet/pods/28a15191-c413-47d3-bf30-00cfea074db4/volumes" Jan 31 15:29:40 crc kubenswrapper[4763]: I0131 15:29:40.737625 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zznq2" podUID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerName="registry-server" containerID="cri-o://3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219" gracePeriod=2 Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.165043 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.304198 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw9k4\" (UniqueName: \"kubernetes.io/projected/f4ae6311-229c-409c-9ef7-42c47c4d009f-kube-api-access-jw9k4\") pod \"f4ae6311-229c-409c-9ef7-42c47c4d009f\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.304250 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-catalog-content\") pod \"f4ae6311-229c-409c-9ef7-42c47c4d009f\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.304329 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-utilities\") pod \"f4ae6311-229c-409c-9ef7-42c47c4d009f\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.307515 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-utilities" (OuterVolumeSpecName: "utilities") pod "f4ae6311-229c-409c-9ef7-42c47c4d009f" (UID: "f4ae6311-229c-409c-9ef7-42c47c4d009f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.337913 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4ae6311-229c-409c-9ef7-42c47c4d009f-kube-api-access-jw9k4" (OuterVolumeSpecName: "kube-api-access-jw9k4") pod "f4ae6311-229c-409c-9ef7-42c47c4d009f" (UID: "f4ae6311-229c-409c-9ef7-42c47c4d009f"). InnerVolumeSpecName "kube-api-access-jw9k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.347145 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4ae6311-229c-409c-9ef7-42c47c4d009f" (UID: "f4ae6311-229c-409c-9ef7-42c47c4d009f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.406475 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw9k4\" (UniqueName: \"kubernetes.io/projected/f4ae6311-229c-409c-9ef7-42c47c4d009f-kube-api-access-jw9k4\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.406524 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.406537 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.747977 4763 generic.go:334] "Generic (PLEG): container finished" podID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerID="3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219" exitCode=0 Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.748070 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zznq2" event={"ID":"f4ae6311-229c-409c-9ef7-42c47c4d009f","Type":"ContainerDied","Data":"3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219"} Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.748111 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zznq2" event={"ID":"f4ae6311-229c-409c-9ef7-42c47c4d009f","Type":"ContainerDied","Data":"200089e5c6c719127b84991a83acfdb070ea832ba2581a5989e580c35ecbd770"} Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.748139 4763 scope.go:117] "RemoveContainer" containerID="3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.748934 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.773706 4763 scope.go:117] "RemoveContainer" containerID="b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.790924 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zznq2"] Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.795235 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zznq2"] Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.822971 4763 scope.go:117] "RemoveContainer" containerID="789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.854197 4763 scope.go:117] "RemoveContainer" containerID="3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219" Jan 31 15:29:41 crc kubenswrapper[4763]: E0131 15:29:41.854874 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219\": container with ID starting with 3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219 not found: ID does not exist" containerID="3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.854934 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219"} err="failed to get container status \"3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219\": rpc error: code = NotFound desc = could not find container \"3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219\": container with ID starting with 3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219 not found: ID does not exist" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.854963 4763 scope.go:117] "RemoveContainer" containerID="b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7" Jan 31 15:29:41 crc kubenswrapper[4763]: E0131 15:29:41.855395 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7\": container with ID starting with b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7 not found: ID does not exist" containerID="b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.855433 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7"} err="failed to get container status \"b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7\": rpc error: code = NotFound desc = could not find container \"b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7\": container with ID starting with b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7 not found: ID does not exist" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.855458 4763 scope.go:117] "RemoveContainer" containerID="789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d" Jan 31 15:29:41 crc kubenswrapper[4763]: E0131 15:29:41.855818 4763 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d\": container with ID starting with 789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d not found: ID does not exist" containerID="789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.855843 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d"} err="failed to get container status \"789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d\": rpc error: code = NotFound desc = could not find container \"789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d\": container with ID starting with 789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d not found: ID does not exist" Jan 31 15:29:43 crc kubenswrapper[4763]: I0131 15:29:43.053053 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4ae6311-229c-409c-9ef7-42c47c4d009f" path="/var/lib/kubelet/pods/f4ae6311-229c-409c-9ef7-42c47c4d009f/volumes" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.135441 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn"] Jan 31 15:30:00 crc kubenswrapper[4763]: E0131 15:30:00.136306 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerName="registry-server" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136324 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerName="registry-server" Jan 31 15:30:00 crc kubenswrapper[4763]: E0131 15:30:00.136361 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerName="extract-utilities" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136370 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerName="extract-utilities" Jan 31 15:30:00 crc kubenswrapper[4763]: E0131 15:30:00.136389 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a15191-c413-47d3-bf30-00cfea074db4" containerName="proxy-httpd" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136394 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a15191-c413-47d3-bf30-00cfea074db4" containerName="proxy-httpd" Jan 31 15:30:00 crc kubenswrapper[4763]: E0131 15:30:00.136404 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a15191-c413-47d3-bf30-00cfea074db4" containerName="proxy-server" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136409 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a15191-c413-47d3-bf30-00cfea074db4" containerName="proxy-server" Jan 31 15:30:00 crc kubenswrapper[4763]: E0131 15:30:00.136419 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerName="extract-content" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136424 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerName="extract-content" Jan 31 15:30:00 crc kubenswrapper[4763]: E0131 15:30:00.136431 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb93fd42-19ef-474d-9a43-5f77f863f4f3" 
containerName="swift-ring-rebalance" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136436 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb93fd42-19ef-474d-9a43-5f77f863f4f3" containerName="swift-ring-rebalance" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136581 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb93fd42-19ef-474d-9a43-5f77f863f4f3" containerName="swift-ring-rebalance" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136597 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerName="registry-server" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136609 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="28a15191-c413-47d3-bf30-00cfea074db4" containerName="proxy-httpd" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136618 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="28a15191-c413-47d3-bf30-00cfea074db4" containerName="proxy-server" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.137110 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.141552 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.141979 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.148580 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn"] Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.288225 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87cc2c04-9338-4eae-a93c-6956d090e393-secret-volume\") pod \"collect-profiles-29497890-dfwzn\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.288339 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cc2c04-9338-4eae-a93c-6956d090e393-config-volume\") pod \"collect-profiles-29497890-dfwzn\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.288507 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjwgm\" (UniqueName: \"kubernetes.io/projected/87cc2c04-9338-4eae-a93c-6956d090e393-kube-api-access-gjwgm\") pod \"collect-profiles-29497890-dfwzn\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.390144 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cc2c04-9338-4eae-a93c-6956d090e393-config-volume\") pod \"collect-profiles-29497890-dfwzn\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.390254 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjwgm\" (UniqueName: \"kubernetes.io/projected/87cc2c04-9338-4eae-a93c-6956d090e393-kube-api-access-gjwgm\") pod \"collect-profiles-29497890-dfwzn\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.390347 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87cc2c04-9338-4eae-a93c-6956d090e393-secret-volume\") pod \"collect-profiles-29497890-dfwzn\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.391112 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cc2c04-9338-4eae-a93c-6956d090e393-config-volume\") pod \"collect-profiles-29497890-dfwzn\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.396484 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87cc2c04-9338-4eae-a93c-6956d090e393-secret-volume\") pod \"collect-profiles-29497890-dfwzn\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.417681 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjwgm\" (UniqueName: \"kubernetes.io/projected/87cc2c04-9338-4eae-a93c-6956d090e393-kube-api-access-gjwgm\") pod \"collect-profiles-29497890-dfwzn\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.492578 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.902124 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn"] Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.920427 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" event={"ID":"87cc2c04-9338-4eae-a93c-6956d090e393","Type":"ContainerStarted","Data":"16123395064e18d5cf7cee452356c445960dfae0998242b8495d6fbbb011bed8"} Jan 31 15:30:01 crc kubenswrapper[4763]: I0131 15:30:01.929449 4763 generic.go:334] "Generic (PLEG): container finished" podID="87cc2c04-9338-4eae-a93c-6956d090e393" containerID="f53cb550e4f664c73596f1b533610c1a7d9c3e449ac8df96716136f175c798bb" exitCode=0 Jan 31 15:30:01 crc kubenswrapper[4763]: I0131 15:30:01.929501 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" event={"ID":"87cc2c04-9338-4eae-a93c-6956d090e393","Type":"ContainerDied","Data":"f53cb550e4f664c73596f1b533610c1a7d9c3e449ac8df96716136f175c798bb"} Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.246438 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.336015 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cc2c04-9338-4eae-a93c-6956d090e393-config-volume\") pod \"87cc2c04-9338-4eae-a93c-6956d090e393\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.336575 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjwgm\" (UniqueName: \"kubernetes.io/projected/87cc2c04-9338-4eae-a93c-6956d090e393-kube-api-access-gjwgm\") pod \"87cc2c04-9338-4eae-a93c-6956d090e393\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.336678 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cc2c04-9338-4eae-a93c-6956d090e393-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cc2c04-9338-4eae-a93c-6956d090e393" (UID: "87cc2c04-9338-4eae-a93c-6956d090e393"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.337054 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87cc2c04-9338-4eae-a93c-6956d090e393-secret-volume\") pod \"87cc2c04-9338-4eae-a93c-6956d090e393\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.337720 4763 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cc2c04-9338-4eae-a93c-6956d090e393-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.341791 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cc2c04-9338-4eae-a93c-6956d090e393-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "87cc2c04-9338-4eae-a93c-6956d090e393" (UID: "87cc2c04-9338-4eae-a93c-6956d090e393"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.341917 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cc2c04-9338-4eae-a93c-6956d090e393-kube-api-access-gjwgm" (OuterVolumeSpecName: "kube-api-access-gjwgm") pod "87cc2c04-9338-4eae-a93c-6956d090e393" (UID: "87cc2c04-9338-4eae-a93c-6956d090e393"). InnerVolumeSpecName "kube-api-access-gjwgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.439658 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjwgm\" (UniqueName: \"kubernetes.io/projected/87cc2c04-9338-4eae-a93c-6956d090e393-kube-api-access-gjwgm\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.439713 4763 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87cc2c04-9338-4eae-a93c-6956d090e393-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.949830 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.949868 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" event={"ID":"87cc2c04-9338-4eae-a93c-6956d090e393","Type":"ContainerDied","Data":"16123395064e18d5cf7cee452356c445960dfae0998242b8495d6fbbb011bed8"} Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.950848 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16123395064e18d5cf7cee452356c445960dfae0998242b8495d6fbbb011bed8" Jan 31 15:30:04 crc kubenswrapper[4763]: I0131 15:30:04.336552 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989"] Jan 31 15:30:04 crc kubenswrapper[4763]: I0131 15:30:04.347240 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989"] Jan 31 15:30:05 crc kubenswrapper[4763]: I0131 15:30:05.049789 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20c40a34-73d2-4a28-b2bd-31e19e6361d2" path="/var/lib/kubelet/pods/20c40a34-73d2-4a28-b2bd-31e19e6361d2/volumes" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.000582 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="b0b563b544c427ad8b7a14d7adac600e852739031807938b6bfb905e2e57f1b5" exitCode=137 Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.000649 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"b0b563b544c427ad8b7a14d7adac600e852739031807938b6bfb905e2e57f1b5"} Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.078020 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.178338 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.178473 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-lock\") pod \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.178523 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb2lm\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-kube-api-access-sb2lm\") pod \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.178567 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-cache\") pod \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.178645 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-combined-ca-bundle\") pod \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.178688 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.179001 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-lock" (OuterVolumeSpecName: "lock") pod "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.179224 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-cache" (OuterVolumeSpecName: "cache") pod "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96"). InnerVolumeSpecName "cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.179817 4763 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.179841 4763 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-cache\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.183543 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-kube-api-access-sb2lm" (OuterVolumeSpecName: "kube-api-access-sb2lm") pod "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96"). InnerVolumeSpecName "kube-api-access-sb2lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.186829 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.199819 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "swift") pod "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.281994 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.282342 4763 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.282356 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb2lm\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-kube-api-access-sb2lm\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.322492 4763 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.383943 4763 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.428304 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.485680 4763 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.018154 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"08b94966ac20bf1259e091c08cf195b1a8050f7facc3b6925f12ae4f6758d415"} Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.018246 4763 scope.go:117] "RemoveContainer" containerID="b0b563b544c427ad8b7a14d7adac600e852739031807938b6bfb905e2e57f1b5" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.018250 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.050857 4763 scope.go:117] "RemoveContainer" containerID="4ce013d55c9bbe30b3a876684f92e64239ef5b5ebc001cea9bdff39c5628c354" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.059387 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.062612 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.074719 4763 scope.go:117] "RemoveContainer" containerID="13645e6ffc4344b9af2d1ceda74f8853a0dba80f284062e6d075c13aefd2c584" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.090986 4763 scope.go:117] "RemoveContainer" containerID="f7a6591e71b1676140102a34e2d10a5e2e57e3288fef923f32e0ff238738e50b" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.108104 4763 scope.go:117] "RemoveContainer" containerID="75623ddf20f8fda7b57c75c03bc9e8bada9ac4dad0d734f2ba1d8790ab36d2c1" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.128971 4763 scope.go:117] "RemoveContainer" containerID="d475c3ef66bf23eca234aaaa87dccd531be71d1b54e59d9e34a453e486d2922d" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.144967 4763 scope.go:117] "RemoveContainer" containerID="3c538f45263d816240971928fab851f939c124cdc4848caf9f7b76f74cad7462" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.159985 4763 scope.go:117] "RemoveContainer" containerID="5cc3982d159833064b5c1cdf1058f4d310c549bc30d9583666bb8c50d8deb789" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.176310 4763 scope.go:117] "RemoveContainer" containerID="3334bd67954ed15598914f9f4fbef205014fe259154cc379e034e929afe48ba6" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.194650 4763 scope.go:117] "RemoveContainer" containerID="3faecda72dcb03b6c1659c54cec9556dc478a9f14238f362f4c1b75ec2935478" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.215549 4763 scope.go:117] "RemoveContainer" containerID="4ff6b19267726798e5bafd41f33c57adc7cb799816fce89a0072cb78d8920082" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.239548 4763 scope.go:117] "RemoveContainer" containerID="8a5f0dcb0aa81eaac0bb8540fa03cbb7e7d3594bfb5d9f0ba94b71a916c7c113" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.256989 4763 scope.go:117] "RemoveContainer" containerID="c6776da1a16e11de5e0299bf5a94671ceb2fc44db56762423b817128616ad972" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.280190 4763 scope.go:117] 
"RemoveContainer" containerID="27af2de829fbb2ed8d9f250d2e6bfbc1c3ba8467ddb49414f8464eb2d5f6f4ee" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.298887 4763 scope.go:117] "RemoveContainer" containerID="b7a53ccb6ba4646a1c596b8440d55d97505667eefcbaff4f73ec11d24991e295" Jan 31 15:30:09 crc kubenswrapper[4763]: I0131 15:30:09.052871 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" path="/var/lib/kubelet/pods/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96/volumes" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.066218 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-n76lr/must-gather-dplx7"] Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067233 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-server" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067260 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-server" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067286 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87cc2c04-9338-4eae-a93c-6956d090e393" containerName="collect-profiles" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067298 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="87cc2c04-9338-4eae-a93c-6956d090e393" containerName="collect-profiles" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067309 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="swift-recon-cron" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067320 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="swift-recon-cron" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067335 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-auditor" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067347 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-auditor" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067372 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-server" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067382 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-server" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067396 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="rsync" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067406 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="rsync" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067424 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-reaper" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067434 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-reaper" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067445 4763 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-replicator" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067456 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-replicator" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067474 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-replicator" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067484 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-replicator" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067500 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-replicator" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067512 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-replicator" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067531 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-auditor" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067541 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-auditor" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067559 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-server" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067569 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-server" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067579 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-expirer" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067588 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-expirer" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067601 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-updater" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067611 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-updater" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067629 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-auditor" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067640 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-auditor" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067652 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-updater" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067662 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-updater" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 
15:30:29.068026 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-server" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068052 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-server" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068066 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="swift-recon-cron" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068081 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-updater" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068092 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-updater" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068108 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-server" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068118 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="rsync" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068130 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-reaper" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068140 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-replicator" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068151 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-auditor" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068167 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-replicator" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068182 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-replicator" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068198 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-auditor" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068213 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-auditor" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068228 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-expirer" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068239 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="87cc2c04-9338-4eae-a93c-6956d090e393" containerName="collect-profiles" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.069298 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.073638 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-n76lr"/"kube-root-ca.crt" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.073730 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-n76lr"/"openshift-service-ca.crt" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.089865 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-n76lr/must-gather-dplx7"] Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.123808 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4jp4\" (UniqueName: \"kubernetes.io/projected/ac81e32a-c558-4275-8b3e-448c797bb0a9-kube-api-access-h4jp4\") pod \"must-gather-dplx7\" (UID: \"ac81e32a-c558-4275-8b3e-448c797bb0a9\") " pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.123908 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ac81e32a-c558-4275-8b3e-448c797bb0a9-must-gather-output\") pod \"must-gather-dplx7\" (UID: \"ac81e32a-c558-4275-8b3e-448c797bb0a9\") " pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.225349 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4jp4\" (UniqueName: \"kubernetes.io/projected/ac81e32a-c558-4275-8b3e-448c797bb0a9-kube-api-access-h4jp4\") pod \"must-gather-dplx7\" (UID: \"ac81e32a-c558-4275-8b3e-448c797bb0a9\") " pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.225434 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ac81e32a-c558-4275-8b3e-448c797bb0a9-must-gather-output\") pod \"must-gather-dplx7\" (UID: \"ac81e32a-c558-4275-8b3e-448c797bb0a9\") " pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.225973 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ac81e32a-c558-4275-8b3e-448c797bb0a9-must-gather-output\") pod \"must-gather-dplx7\" (UID: \"ac81e32a-c558-4275-8b3e-448c797bb0a9\") " pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.251605 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4jp4\" (UniqueName: \"kubernetes.io/projected/ac81e32a-c558-4275-8b3e-448c797bb0a9-kube-api-access-h4jp4\") pod \"must-gather-dplx7\" (UID: \"ac81e32a-c558-4275-8b3e-448c797bb0a9\") " pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.385049 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.793356 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-n76lr/must-gather-dplx7"] Jan 31 15:30:30 crc kubenswrapper[4763]: I0131 15:30:30.235875 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76lr/must-gather-dplx7" event={"ID":"ac81e32a-c558-4275-8b3e-448c797bb0a9","Type":"ContainerStarted","Data":"e86aab5efd3d42c42e635de7de003266abb97814e03efe21e4dff1b53d4c5441"} Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.169418 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2v8cv"] Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.172243 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.182082 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2v8cv"] Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.288750 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t55pd\" (UniqueName: \"kubernetes.io/projected/d4b48077-151c-45b6-bc68-224b69ea1311-kube-api-access-t55pd\") pod \"redhat-operators-2v8cv\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.288885 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-utilities\") pod \"redhat-operators-2v8cv\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.288973 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-catalog-content\") pod \"redhat-operators-2v8cv\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.390262 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-utilities\") pod \"redhat-operators-2v8cv\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.390401 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-catalog-content\") pod \"redhat-operators-2v8cv\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.390489 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t55pd\" (UniqueName: \"kubernetes.io/projected/d4b48077-151c-45b6-bc68-224b69ea1311-kube-api-access-t55pd\") pod \"redhat-operators-2v8cv\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 
15:30:33.390745 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-utilities\") pod \"redhat-operators-2v8cv\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.391019 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-catalog-content\") pod \"redhat-operators-2v8cv\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.416770 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t55pd\" (UniqueName: \"kubernetes.io/projected/d4b48077-151c-45b6-bc68-224b69ea1311-kube-api-access-t55pd\") pod \"redhat-operators-2v8cv\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.505933 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.721663 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2v8cv"] Jan 31 15:30:33 crc kubenswrapper[4763]: W0131 15:30:33.732872 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4b48077_151c_45b6_bc68_224b69ea1311.slice/crio-19b4c4ad9d2f69e4cf695f14ec510d00e0e9f6b9c213dc65e9fc4fe6eee14517 WatchSource:0}: Error finding container 19b4c4ad9d2f69e4cf695f14ec510d00e0e9f6b9c213dc65e9fc4fe6eee14517: Status 404 returned error can't find the container with id 19b4c4ad9d2f69e4cf695f14ec510d00e0e9f6b9c213dc65e9fc4fe6eee14517 Jan 31 15:30:34 crc kubenswrapper[4763]: I0131 15:30:34.275768 4763 generic.go:334] "Generic (PLEG): container finished" podID="d4b48077-151c-45b6-bc68-224b69ea1311" containerID="a70ef95aa92cbd197490f6845d4fc54c5e33b134d5f67705df1f3bf63c69e11f" exitCode=0 Jan 31 15:30:34 crc kubenswrapper[4763]: I0131 15:30:34.275883 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v8cv" event={"ID":"d4b48077-151c-45b6-bc68-224b69ea1311","Type":"ContainerDied","Data":"a70ef95aa92cbd197490f6845d4fc54c5e33b134d5f67705df1f3bf63c69e11f"} Jan 31 15:30:34 crc kubenswrapper[4763]: I0131 15:30:34.276135 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v8cv" event={"ID":"d4b48077-151c-45b6-bc68-224b69ea1311","Type":"ContainerStarted","Data":"19b4c4ad9d2f69e4cf695f14ec510d00e0e9f6b9c213dc65e9fc4fe6eee14517"} Jan 31 15:30:34 crc kubenswrapper[4763]: I0131 15:30:34.280871 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76lr/must-gather-dplx7" event={"ID":"ac81e32a-c558-4275-8b3e-448c797bb0a9","Type":"ContainerStarted","Data":"f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f"} Jan 31 15:30:34 crc kubenswrapper[4763]: I0131 15:30:34.280929 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76lr/must-gather-dplx7" event={"ID":"ac81e32a-c558-4275-8b3e-448c797bb0a9","Type":"ContainerStarted","Data":"d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7"} Jan 31 
15:30:34 crc kubenswrapper[4763]: I0131 15:30:34.321046 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-n76lr/must-gather-dplx7" podStartSLOduration=1.5278549959999999 podStartE2EDuration="5.321017967s" podCreationTimestamp="2026-01-31 15:30:29 +0000 UTC" firstStartedPulling="2026-01-31 15:30:29.805318981 +0000 UTC m=+2149.560057274" lastFinishedPulling="2026-01-31 15:30:33.598481952 +0000 UTC m=+2153.353220245" observedRunningTime="2026-01-31 15:30:34.314183452 +0000 UTC m=+2154.068921785" watchObservedRunningTime="2026-01-31 15:30:34.321017967 +0000 UTC m=+2154.075756300" Jan 31 15:30:35 crc kubenswrapper[4763]: I0131 15:30:35.288894 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v8cv" event={"ID":"d4b48077-151c-45b6-bc68-224b69ea1311","Type":"ContainerStarted","Data":"7aa5884b469812be642dcb2acf228213ccf6e2d399c7aacdf4b9d0cec4f26833"} Jan 31 15:30:36 crc kubenswrapper[4763]: I0131 15:30:36.296607 4763 generic.go:334] "Generic (PLEG): container finished" podID="d4b48077-151c-45b6-bc68-224b69ea1311" containerID="7aa5884b469812be642dcb2acf228213ccf6e2d399c7aacdf4b9d0cec4f26833" exitCode=0 Jan 31 15:30:36 crc kubenswrapper[4763]: I0131 15:30:36.296651 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v8cv" event={"ID":"d4b48077-151c-45b6-bc68-224b69ea1311","Type":"ContainerDied","Data":"7aa5884b469812be642dcb2acf228213ccf6e2d399c7aacdf4b9d0cec4f26833"} Jan 31 15:30:37 crc kubenswrapper[4763]: I0131 15:30:37.305534 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v8cv" event={"ID":"d4b48077-151c-45b6-bc68-224b69ea1311","Type":"ContainerStarted","Data":"8e258f359d556ec52df9314f5c61020226ed81faa7151c3a364d4c38912f446a"} Jan 31 15:30:37 crc kubenswrapper[4763]: I0131 15:30:37.328299 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2v8cv" podStartSLOduration=1.895086238 podStartE2EDuration="4.328282006s" podCreationTimestamp="2026-01-31 15:30:33 +0000 UTC" firstStartedPulling="2026-01-31 15:30:34.277515794 +0000 UTC m=+2154.032254087" lastFinishedPulling="2026-01-31 15:30:36.710711562 +0000 UTC m=+2156.465449855" observedRunningTime="2026-01-31 15:30:37.32511859 +0000 UTC m=+2157.079856883" watchObservedRunningTime="2026-01-31 15:30:37.328282006 +0000 UTC m=+2157.083020299" Jan 31 15:30:43 crc kubenswrapper[4763]: I0131 15:30:43.176842 4763 scope.go:117] "RemoveContainer" containerID="f8f3a2ee5fed8706cd33e083136df0eff635e736dd9b8ba1a9267757cea26ad5" Jan 31 15:30:43 crc kubenswrapper[4763]: I0131 15:30:43.506982 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:43 crc kubenswrapper[4763]: I0131 15:30:43.507037 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:43 crc kubenswrapper[4763]: I0131 15:30:43.564862 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:44 crc kubenswrapper[4763]: I0131 15:30:44.177034 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 31 15:30:44 crc kubenswrapper[4763]: I0131 15:30:44.177476 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:30:44 crc kubenswrapper[4763]: I0131 15:30:44.438720 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.151753 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2v8cv"] Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.152227 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2v8cv" podUID="d4b48077-151c-45b6-bc68-224b69ea1311" containerName="registry-server" containerID="cri-o://8e258f359d556ec52df9314f5c61020226ed81faa7151c3a364d4c38912f446a" gracePeriod=2 Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.396669 4763 generic.go:334] "Generic (PLEG): container finished" podID="d4b48077-151c-45b6-bc68-224b69ea1311" containerID="8e258f359d556ec52df9314f5c61020226ed81faa7151c3a364d4c38912f446a" exitCode=0 Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.396730 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v8cv" event={"ID":"d4b48077-151c-45b6-bc68-224b69ea1311","Type":"ContainerDied","Data":"8e258f359d556ec52df9314f5c61020226ed81faa7151c3a364d4c38912f446a"} Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.561326 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.583262 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-utilities\") pod \"d4b48077-151c-45b6-bc68-224b69ea1311\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.583473 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t55pd\" (UniqueName: \"kubernetes.io/projected/d4b48077-151c-45b6-bc68-224b69ea1311-kube-api-access-t55pd\") pod \"d4b48077-151c-45b6-bc68-224b69ea1311\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.584304 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-utilities" (OuterVolumeSpecName: "utilities") pod "d4b48077-151c-45b6-bc68-224b69ea1311" (UID: "d4b48077-151c-45b6-bc68-224b69ea1311"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.584615 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-catalog-content\") pod \"d4b48077-151c-45b6-bc68-224b69ea1311\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.585065 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.592133 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b48077-151c-45b6-bc68-224b69ea1311-kube-api-access-t55pd" (OuterVolumeSpecName: "kube-api-access-t55pd") pod "d4b48077-151c-45b6-bc68-224b69ea1311" (UID: "d4b48077-151c-45b6-bc68-224b69ea1311"). InnerVolumeSpecName "kube-api-access-t55pd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.686985 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t55pd\" (UniqueName: \"kubernetes.io/projected/d4b48077-151c-45b6-bc68-224b69ea1311-kube-api-access-t55pd\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.711893 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4b48077-151c-45b6-bc68-224b69ea1311" (UID: "d4b48077-151c-45b6-bc68-224b69ea1311"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.787991 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:48 crc kubenswrapper[4763]: I0131 15:30:48.406487 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v8cv" event={"ID":"d4b48077-151c-45b6-bc68-224b69ea1311","Type":"ContainerDied","Data":"19b4c4ad9d2f69e4cf695f14ec510d00e0e9f6b9c213dc65e9fc4fe6eee14517"} Jan 31 15:30:48 crc kubenswrapper[4763]: I0131 15:30:48.406540 4763 scope.go:117] "RemoveContainer" containerID="8e258f359d556ec52df9314f5c61020226ed81faa7151c3a364d4c38912f446a" Jan 31 15:30:48 crc kubenswrapper[4763]: I0131 15:30:48.406588 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:48 crc kubenswrapper[4763]: I0131 15:30:48.437446 4763 scope.go:117] "RemoveContainer" containerID="7aa5884b469812be642dcb2acf228213ccf6e2d399c7aacdf4b9d0cec4f26833" Jan 31 15:30:48 crc kubenswrapper[4763]: I0131 15:30:48.442871 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2v8cv"] Jan 31 15:30:48 crc kubenswrapper[4763]: I0131 15:30:48.460938 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2v8cv"] Jan 31 15:30:48 crc kubenswrapper[4763]: I0131 15:30:48.474390 4763 scope.go:117] "RemoveContainer" containerID="a70ef95aa92cbd197490f6845d4fc54c5e33b134d5f67705df1f3bf63c69e11f" Jan 31 15:30:49 crc kubenswrapper[4763]: I0131 15:30:49.050914 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b48077-151c-45b6-bc68-224b69ea1311" path="/var/lib/kubelet/pods/d4b48077-151c-45b6-bc68-224b69ea1311/volumes" Jan 31 15:31:14 crc kubenswrapper[4763]: I0131 15:31:14.176903 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:31:14 crc kubenswrapper[4763]: I0131 15:31:14.177411 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.171385 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck_82458dee-ae6f-46c9-ac1b-745146c8b9bf/util/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.321172 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck_82458dee-ae6f-46c9-ac1b-745146c8b9bf/util/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.331590 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck_82458dee-ae6f-46c9-ac1b-745146c8b9bf/pull/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.372175 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck_82458dee-ae6f-46c9-ac1b-745146c8b9bf/pull/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.525585 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck_82458dee-ae6f-46c9-ac1b-745146c8b9bf/util/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.536266 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck_82458dee-ae6f-46c9-ac1b-745146c8b9bf/extract/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.548578 4763 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck_82458dee-ae6f-46c9-ac1b-745146c8b9bf/pull/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.709373 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr_b6835994-86f5-4950-b010-780530fceffe/util/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.811179 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr_b6835994-86f5-4950-b010-780530fceffe/util/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.812100 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr_b6835994-86f5-4950-b010-780530fceffe/pull/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.860919 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr_b6835994-86f5-4950-b010-780530fceffe/pull/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.056367 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr_b6835994-86f5-4950-b010-780530fceffe/util/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.056566 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr_b6835994-86f5-4950-b010-780530fceffe/extract/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.058498 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr_b6835994-86f5-4950-b010-780530fceffe/pull/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.222776 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f_6fb80892-b089-4dff-baa8-44ffdf6b9b84/util/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.384256 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f_6fb80892-b089-4dff-baa8-44ffdf6b9b84/pull/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.413007 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f_6fb80892-b089-4dff-baa8-44ffdf6b9b84/pull/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.420772 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f_6fb80892-b089-4dff-baa8-44ffdf6b9b84/util/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.602165 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f_6fb80892-b089-4dff-baa8-44ffdf6b9b84/util/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.603162 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f_6fb80892-b089-4dff-baa8-44ffdf6b9b84/extract/0.log" Jan 31 15:31:18 
Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.641073 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f_6fb80892-b089-4dff-baa8-44ffdf6b9b84/pull/0.log"
Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.750661 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k_7b615184-cd97-4133-b2e4-fc44e41d1e6b/util/0.log"
Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.941368 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k_7b615184-cd97-4133-b2e4-fc44e41d1e6b/pull/0.log"
Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.954655 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k_7b615184-cd97-4133-b2e4-fc44e41d1e6b/util/0.log"
Jan 31 15:31:19 crc kubenswrapper[4763]: I0131 15:31:19.000493 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k_7b615184-cd97-4133-b2e4-fc44e41d1e6b/pull/0.log"
Jan 31 15:31:19 crc kubenswrapper[4763]: I0131 15:31:19.193961 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k_7b615184-cd97-4133-b2e4-fc44e41d1e6b/util/0.log"
Jan 31 15:31:19 crc kubenswrapper[4763]: I0131 15:31:19.215329 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k_7b615184-cd97-4133-b2e4-fc44e41d1e6b/pull/0.log"
Jan 31 15:31:19 crc kubenswrapper[4763]: I0131 15:31:19.253593 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k_7b615184-cd97-4133-b2e4-fc44e41d1e6b/extract/0.log"
Jan 31 15:31:19 crc kubenswrapper[4763]: I0131 15:31:19.439803 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-index-2w984_ef84b681-2ea6-4684-84c0-6d452a5b47df/registry-server/0.log"
Jan 31 15:31:19 crc kubenswrapper[4763]: I0131 15:31:19.778082 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb_3636515d-8655-48d7-b0f6-54e4c6635f1c/util/0.log"
Jan 31 15:31:19 crc kubenswrapper[4763]: I0131 15:31:19.988924 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb_3636515d-8655-48d7-b0f6-54e4c6635f1c/util/0.log"
Jan 31 15:31:19 crc kubenswrapper[4763]: I0131 15:31:19.991024 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb_3636515d-8655-48d7-b0f6-54e4c6635f1c/pull/0.log"
Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.012760 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb_3636515d-8655-48d7-b0f6-54e4c6635f1c/pull/0.log"
Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.184802 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb_3636515d-8655-48d7-b0f6-54e4c6635f1c/util/0.log"
Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.243825 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb_3636515d-8655-48d7-b0f6-54e4c6635f1c/pull/0.log"
Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.261533 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb_3636515d-8655-48d7-b0f6-54e4c6635f1c/extract/0.log"
Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.415472 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s_50493718-9240-44a6-bb1a-4c6c97473f2d/util/0.log"
Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.638311 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s_50493718-9240-44a6-bb1a-4c6c97473f2d/pull/0.log"
Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.643406 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s_50493718-9240-44a6-bb1a-4c6c97473f2d/util/0.log"
Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.654890 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s_50493718-9240-44a6-bb1a-4c6c97473f2d/pull/0.log"
Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.844639 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s_50493718-9240-44a6-bb1a-4c6c97473f2d/util/0.log"
Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.873227 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s_50493718-9240-44a6-bb1a-4c6c97473f2d/extract/0.log"
Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.915942 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s_50493718-9240-44a6-bb1a-4c6c97473f2d/pull/0.log"
Jan 31 15:31:21 crc kubenswrapper[4763]: I0131 15:31:21.100501 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5b76796566-wfzb5_ff757490-bd0f-4140-9f70-e5ec9d26353f/manager/0.log"
Jan 31 15:31:21 crc kubenswrapper[4763]: I0131 15:31:21.193499 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-index-9vcjd_df73235a-c7ce-449c-b163-341974166624/registry-server/0.log"
Jan 31 15:31:21 crc kubenswrapper[4763]: I0131 15:31:21.328639 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-index-njgcq_191c97ac-f003-4a51-8f06-395adf3ac8a7/registry-server/0.log"
Jan 31 15:31:21 crc kubenswrapper[4763]: I0131 15:31:21.368109 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7598465c56-xt6m7_970b855e-e278-4e6b-b9ba-733f8f798f59/manager/0.log"
Jan 31 15:31:21 crc kubenswrapper[4763]: I0131 15:31:21.524600 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-68956c85f5-mrnqc_30bcffc2-0054-475e-af66-74b73ec95edb/manager/0.log"
Jan 31 15:31:21 crc kubenswrapper[4763]: I0131 15:31:21.602479 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-index-d2rtv_29673dd0-5315-4de5-bbc4-d8deb8581b9d/registry-server/0.log"
Jan 31 15:31:21 crc kubenswrapper[4763]: I0131 15:31:21.744484 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-779fc9694b-2ltrp_8225c1b7-e70c-4eac-8c03-c85f86ccba6b/operator/0.log"
Jan 31 15:31:21 crc kubenswrapper[4763]: I0131 15:31:21.870364 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-index-l9x4g_6fa47f40-fce4-4e57-aebb-3313c4c996dd/registry-server/0.log"
Jan 31 15:31:22 crc kubenswrapper[4763]: I0131 15:31:22.074373 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-77769db8d5-gb8pv_42b142bb-6946-4933-841b-33c9fc9899b2/manager/0.log"
Jan 31 15:31:22 crc kubenswrapper[4763]: I0131 15:31:22.105972 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-index-h5chr_2c571391-06de-46b1-8932-99d44a63dc42/registry-server/0.log"
Jan 31 15:31:22 crc kubenswrapper[4763]: I0131 15:31:22.375758 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-56656dfbf6-dtcnh_8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5/manager/0.log"
Jan 31 15:31:37 crc kubenswrapper[4763]: I0131 15:31:37.271106 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dncp9_a7826828-7856-44a4-be9f-f1a939950c3e/control-plane-machine-set-operator/0.log"
Jan 31 15:31:37 crc kubenswrapper[4763]: I0131 15:31:37.406963 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bwc2g_5d9ac26c-eb66-4772-b7ee-a6b646092c4b/kube-rbac-proxy/0.log"
Jan 31 15:31:37 crc kubenswrapper[4763]: I0131 15:31:37.444740 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bwc2g_5d9ac26c-eb66-4772-b7ee-a6b646092c4b/machine-api-operator/0.log"
Jan 31 15:31:44 crc kubenswrapper[4763]: I0131 15:31:44.177399 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 15:31:44 crc kubenswrapper[4763]: I0131 15:31:44.177980 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 15:31:44 crc kubenswrapper[4763]: I0131 15:31:44.178024 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x"
containerStatusID={"Type":"cri-o","ID":"53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:31:44 crc kubenswrapper[4763]: I0131 15:31:44.178599 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" gracePeriod=600 Jan 31 15:31:44 crc kubenswrapper[4763]: I0131 15:31:44.786996 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" exitCode=0 Jan 31 15:31:44 crc kubenswrapper[4763]: I0131 15:31:44.787075 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb"} Jan 31 15:31:44 crc kubenswrapper[4763]: I0131 15:31:44.787343 4763 scope.go:117] "RemoveContainer" containerID="2bfabe5df54337c4196a35b7009ba75d2a72e0f1ccf13fd0226987ae1c4b7c11" Jan 31 15:31:44 crc kubenswrapper[4763]: E0131 15:31:44.807442 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:31:45 crc kubenswrapper[4763]: I0131 15:31:45.794853 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:31:45 crc kubenswrapper[4763]: E0131 15:31:45.795117 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:31:57 crc kubenswrapper[4763]: I0131 15:31:57.042787 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:31:57 crc kubenswrapper[4763]: E0131 15:31:57.043791 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:32:06 crc kubenswrapper[4763]: I0131 15:32:06.793438 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-f4wjv_30f91c96-0c0b-4426-986d-715d11a222b3/kube-rbac-proxy/0.log" Jan 31 15:32:06 crc kubenswrapper[4763]: I0131 
15:32:06.941352 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-f4wjv_30f91c96-0c0b-4426-986d-715d11a222b3/controller/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.020014 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-frr-files/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.222033 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-reloader/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.224199 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-reloader/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.252171 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-metrics/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.252238 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-frr-files/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.406783 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-frr-files/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.416883 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-metrics/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.423151 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-reloader/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.427118 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-metrics/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.591847 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-metrics/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.599827 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/controller/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.614006 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-reloader/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.628132 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-frr-files/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.748751 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/frr-metrics/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.781051 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/kube-rbac-proxy/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.857488 4763 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/kube-rbac-proxy-frr/0.log" Jan 31 15:32:08 crc kubenswrapper[4763]: I0131 15:32:08.038407 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-wwdkt_d9c89dc4-758c-449e-bd6c-76f27ee6ecec/frr-k8s-webhook-server/0.log" Jan 31 15:32:08 crc kubenswrapper[4763]: I0131 15:32:08.057828 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/reloader/0.log" Jan 31 15:32:08 crc kubenswrapper[4763]: I0131 15:32:08.250429 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-64b6b97b4f-gbf25_5a42a356-dc67-417c-b291-c079e880aa79/manager/0.log" Jan 31 15:32:08 crc kubenswrapper[4763]: I0131 15:32:08.254120 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/frr/0.log" Jan 31 15:32:08 crc kubenswrapper[4763]: I0131 15:32:08.407264 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6448f7d6f6-k9gcj_911c2e7f-03a5-49a2-8db7-5c63c602ef29/webhook-server/0.log" Jan 31 15:32:08 crc kubenswrapper[4763]: I0131 15:32:08.420530 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kf27r_8fe7a08d-0d51-422f-9477-932841b77158/kube-rbac-proxy/0.log" Jan 31 15:32:08 crc kubenswrapper[4763]: I0131 15:32:08.607354 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kf27r_8fe7a08d-0d51-422f-9477-932841b77158/speaker/0.log" Jan 31 15:32:12 crc kubenswrapper[4763]: I0131 15:32:12.041611 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:32:12 crc kubenswrapper[4763]: E0131 15:32:12.042132 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:32:22 crc kubenswrapper[4763]: I0131 15:32:22.942359 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-api-697dc779fb-sgr8v_5d7c9f19-bf9f-4c6c-a113-a10d6be02620/barbican-api/0.log" Jan 31 15:32:22 crc kubenswrapper[4763]: I0131 15:32:22.987789 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-api-697dc779fb-sgr8v_5d7c9f19-bf9f-4c6c-a113-a10d6be02620/barbican-api-log/0.log" Jan 31 15:32:23 crc kubenswrapper[4763]: I0131 15:32:23.097957 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-db-sync-q2gqt_d76ca4ae-ac08-455d-af41-ec673a980e8e/barbican-db-sync/0.log" Jan 31 15:32:23 crc kubenswrapper[4763]: I0131 15:32:23.150664 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-keystone-listener-5b8c7cdd44-cxsxt_ae9bd061-c69e-4ff5-acd4-2b953c4b1657/barbican-keystone-listener/0.log" Jan 31 15:32:23 crc kubenswrapper[4763]: I0131 15:32:23.268474 4763 log.go:25] "Finished parsing log file" 
path="/var/log/pods/swift-kuttl-tests_barbican-keystone-listener-5b8c7cdd44-cxsxt_ae9bd061-c69e-4ff5-acd4-2b953c4b1657/barbican-keystone-listener-log/0.log" Jan 31 15:32:23 crc kubenswrapper[4763]: I0131 15:32:23.307055 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-worker-5477f7cb8f-8rssm_49dd2bcf-ceb5-4df8-8a24-eec8de703f88/barbican-worker/0.log" Jan 31 15:32:23 crc kubenswrapper[4763]: I0131 15:32:23.372405 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-worker-5477f7cb8f-8rssm_49dd2bcf-ceb5-4df8-8a24-eec8de703f88/barbican-worker-log/0.log" Jan 31 15:32:23 crc kubenswrapper[4763]: I0131 15:32:23.745569 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-0_dc474c59-7d29-4ce0-86c8-07d96c462b4e/mysql-bootstrap/0.log" Jan 31 15:32:23 crc kubenswrapper[4763]: I0131 15:32:23.920614 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-0_dc474c59-7d29-4ce0-86c8-07d96c462b4e/mysql-bootstrap/0.log" Jan 31 15:32:23 crc kubenswrapper[4763]: I0131 15:32:23.924914 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_keystone-7659668474-6698l_791f5002-b2b5-488c-99c8-5ed511cffed2/keystone-api/0.log" Jan 31 15:32:23 crc kubenswrapper[4763]: I0131 15:32:23.925998 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-0_dc474c59-7d29-4ce0-86c8-07d96c462b4e/galera/0.log" Jan 31 15:32:24 crc kubenswrapper[4763]: I0131 15:32:24.172903 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-1_e5a89037-391b-4806-8f01-09ddd6a4d13e/mysql-bootstrap/0.log" Jan 31 15:32:24 crc kubenswrapper[4763]: I0131 15:32:24.349153 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-1_e5a89037-391b-4806-8f01-09ddd6a4d13e/galera/0.log" Jan 31 15:32:24 crc kubenswrapper[4763]: I0131 15:32:24.396331 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-1_e5a89037-391b-4806-8f01-09ddd6a4d13e/mysql-bootstrap/0.log" Jan 31 15:32:24 crc kubenswrapper[4763]: I0131 15:32:24.544534 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-2_cd0d5ccb-1d59-428e-9a53-17427cd0e5dc/mysql-bootstrap/0.log" Jan 31 15:32:24 crc kubenswrapper[4763]: I0131 15:32:24.731592 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-2_cd0d5ccb-1d59-428e-9a53-17427cd0e5dc/mysql-bootstrap/0.log" Jan 31 15:32:24 crc kubenswrapper[4763]: I0131 15:32:24.756093 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-2_cd0d5ccb-1d59-428e-9a53-17427cd0e5dc/galera/0.log" Jan 31 15:32:24 crc kubenswrapper[4763]: I0131 15:32:24.921262 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_rabbitmq-server-0_dee0d43f-8ff0-4094-9833-92cda38ee182/setup-container/0.log" Jan 31 15:32:25 crc kubenswrapper[4763]: I0131 15:32:25.108954 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_rabbitmq-server-0_dee0d43f-8ff0-4094-9833-92cda38ee182/rabbitmq/0.log" Jan 31 15:32:25 crc kubenswrapper[4763]: I0131 15:32:25.182911 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_rabbitmq-server-0_dee0d43f-8ff0-4094-9833-92cda38ee182/setup-container/0.log" Jan 31 15:32:25 crc 
kubenswrapper[4763]: I0131 15:32:25.608060 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_memcached-0_ecb69fa0-2df1-477e-a257-05e0f1dd1c76/memcached/0.log" Jan 31 15:32:27 crc kubenswrapper[4763]: I0131 15:32:27.042017 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:32:27 crc kubenswrapper[4763]: E0131 15:32:27.042677 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.288054 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_0f29e959-ca5d-4407-ac1d-4ce7001597aa/util/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.431838 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_0f29e959-ca5d-4407-ac1d-4ce7001597aa/util/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.463173 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_0f29e959-ca5d-4407-ac1d-4ce7001597aa/pull/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.493020 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_0f29e959-ca5d-4407-ac1d-4ce7001597aa/pull/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.624317 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_0f29e959-ca5d-4407-ac1d-4ce7001597aa/pull/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.653358 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_0f29e959-ca5d-4407-ac1d-4ce7001597aa/util/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.705348 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_0f29e959-ca5d-4407-ac1d-4ce7001597aa/extract/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.796244 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-frpn9_5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3/extract-utilities/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.931682 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-frpn9_5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3/extract-content/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.933214 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-frpn9_5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3/extract-utilities/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.955806 4763 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-frpn9_5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3/extract-content/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.092176 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-frpn9_5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3/extract-utilities/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.098240 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-frpn9_5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3/extract-content/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.292470 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v59tf_e2f6ea13-f993-4138-b5d5-a549e9aae21b/extract-utilities/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.415061 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-frpn9_5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3/registry-server/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.448017 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v59tf_e2f6ea13-f993-4138-b5d5-a549e9aae21b/extract-content/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.498393 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v59tf_e2f6ea13-f993-4138-b5d5-a549e9aae21b/extract-utilities/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.527154 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v59tf_e2f6ea13-f993-4138-b5d5-a549e9aae21b/extract-content/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.628246 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v59tf_e2f6ea13-f993-4138-b5d5-a549e9aae21b/extract-utilities/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.631060 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v59tf_e2f6ea13-f993-4138-b5d5-a549e9aae21b/extract-content/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.795306 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-gg2dq_38baa8fd-7b8e-4c7b-ac03-d739f10d242a/marketplace-operator/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.013770 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-znznv_d877abcd-9d8f-4597-b41c-4026d954cc62/extract-utilities/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.041295 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:32:40 crc kubenswrapper[4763]: E0131 15:32:40.041575 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.101034 4763 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-znznv_d877abcd-9d8f-4597-b41c-4026d954cc62/extract-content/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.106556 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-znznv_d877abcd-9d8f-4597-b41c-4026d954cc62/extract-utilities/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.175686 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-znznv_d877abcd-9d8f-4597-b41c-4026d954cc62/extract-content/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.195136 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v59tf_e2f6ea13-f993-4138-b5d5-a549e9aae21b/registry-server/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.390593 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-znznv_d877abcd-9d8f-4597-b41c-4026d954cc62/extract-content/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.391257 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-znznv_d877abcd-9d8f-4597-b41c-4026d954cc62/extract-utilities/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.512213 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-znznv_d877abcd-9d8f-4597-b41c-4026d954cc62/registry-server/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.553340 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gshr8_751420d5-1809-406a-bef8-8e4015d9763b/extract-utilities/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.734188 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gshr8_751420d5-1809-406a-bef8-8e4015d9763b/extract-content/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.734216 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gshr8_751420d5-1809-406a-bef8-8e4015d9763b/extract-utilities/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.744826 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gshr8_751420d5-1809-406a-bef8-8e4015d9763b/extract-content/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.899456 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gshr8_751420d5-1809-406a-bef8-8e4015d9763b/extract-utilities/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.905643 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gshr8_751420d5-1809-406a-bef8-8e4015d9763b/extract-content/0.log" Jan 31 15:32:41 crc kubenswrapper[4763]: I0131 15:32:41.448167 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gshr8_751420d5-1809-406a-bef8-8e4015d9763b/registry-server/0.log" Jan 31 15:32:54 crc kubenswrapper[4763]: I0131 15:32:54.041999 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:32:54 crc kubenswrapper[4763]: E0131 15:32:54.042672 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:33:06 crc kubenswrapper[4763]: I0131 15:33:06.043168 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:33:06 crc kubenswrapper[4763]: E0131 15:33:06.044109 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:33:19 crc kubenswrapper[4763]: I0131 15:33:19.042282 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:33:19 crc kubenswrapper[4763]: E0131 15:33:19.043320 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:33:32 crc kubenswrapper[4763]: I0131 15:33:32.042420 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:33:32 crc kubenswrapper[4763]: E0131 15:33:32.043179 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:33:47 crc kubenswrapper[4763]: I0131 15:33:47.043061 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:33:47 crc kubenswrapper[4763]: E0131 15:33:47.043855 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:33:54 crc kubenswrapper[4763]: I0131 15:33:54.789555 4763 generic.go:334] "Generic (PLEG): container finished" podID="ac81e32a-c558-4275-8b3e-448c797bb0a9" containerID="d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7" exitCode=0 Jan 31 15:33:54 crc kubenswrapper[4763]: I0131 15:33:54.789680 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76lr/must-gather-dplx7" event={"ID":"ac81e32a-c558-4275-8b3e-448c797bb0a9","Type":"ContainerDied","Data":"d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7"} Jan 
31 15:33:54 crc kubenswrapper[4763]: I0131 15:33:54.790571 4763 scope.go:117] "RemoveContainer" containerID="d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7" Jan 31 15:33:55 crc kubenswrapper[4763]: I0131 15:33:55.050425 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-n76lr_must-gather-dplx7_ac81e32a-c558-4275-8b3e-448c797bb0a9/gather/0.log" Jan 31 15:34:00 crc kubenswrapper[4763]: I0131 15:34:00.042050 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:34:00 crc kubenswrapper[4763]: E0131 15:34:00.043325 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:34:01 crc kubenswrapper[4763]: I0131 15:34:01.909763 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-n76lr/must-gather-dplx7"] Jan 31 15:34:01 crc kubenswrapper[4763]: I0131 15:34:01.910770 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-n76lr/must-gather-dplx7" podUID="ac81e32a-c558-4275-8b3e-448c797bb0a9" containerName="copy" containerID="cri-o://f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f" gracePeriod=2 Jan 31 15:34:01 crc kubenswrapper[4763]: I0131 15:34:01.914609 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-n76lr/must-gather-dplx7"] Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.275068 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-n76lr_must-gather-dplx7_ac81e32a-c558-4275-8b3e-448c797bb0a9/copy/0.log" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.275837 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.423897 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ac81e32a-c558-4275-8b3e-448c797bb0a9-must-gather-output\") pod \"ac81e32a-c558-4275-8b3e-448c797bb0a9\" (UID: \"ac81e32a-c558-4275-8b3e-448c797bb0a9\") " Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.424012 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4jp4\" (UniqueName: \"kubernetes.io/projected/ac81e32a-c558-4275-8b3e-448c797bb0a9-kube-api-access-h4jp4\") pod \"ac81e32a-c558-4275-8b3e-448c797bb0a9\" (UID: \"ac81e32a-c558-4275-8b3e-448c797bb0a9\") " Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.434877 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac81e32a-c558-4275-8b3e-448c797bb0a9-kube-api-access-h4jp4" (OuterVolumeSpecName: "kube-api-access-h4jp4") pod "ac81e32a-c558-4275-8b3e-448c797bb0a9" (UID: "ac81e32a-c558-4275-8b3e-448c797bb0a9"). InnerVolumeSpecName "kube-api-access-h4jp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.516067 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac81e32a-c558-4275-8b3e-448c797bb0a9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ac81e32a-c558-4275-8b3e-448c797bb0a9" (UID: "ac81e32a-c558-4275-8b3e-448c797bb0a9"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.525547 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4jp4\" (UniqueName: \"kubernetes.io/projected/ac81e32a-c558-4275-8b3e-448c797bb0a9-kube-api-access-h4jp4\") on node \"crc\" DevicePath \"\"" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.525584 4763 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ac81e32a-c558-4275-8b3e-448c797bb0a9-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.849525 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-n76lr_must-gather-dplx7_ac81e32a-c558-4275-8b3e-448c797bb0a9/copy/0.log" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.850164 4763 generic.go:334] "Generic (PLEG): container finished" podID="ac81e32a-c558-4275-8b3e-448c797bb0a9" containerID="f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f" exitCode=143 Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.850233 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.850244 4763 scope.go:117] "RemoveContainer" containerID="f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.869405 4763 scope.go:117] "RemoveContainer" containerID="d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.925019 4763 scope.go:117] "RemoveContainer" containerID="f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f" Jan 31 15:34:02 crc kubenswrapper[4763]: E0131 15:34:02.925522 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f\": container with ID starting with f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f not found: ID does not exist" containerID="f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.925568 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f"} err="failed to get container status \"f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f\": rpc error: code = NotFound desc = could not find container \"f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f\": container with ID starting with f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f not found: ID does not exist" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.925598 4763 scope.go:117] "RemoveContainer" containerID="d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7" Jan 31 15:34:02 crc 
kubenswrapper[4763]: E0131 15:34:02.926657 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7\": container with ID starting with d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7 not found: ID does not exist" containerID="d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.926712 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7"} err="failed to get container status \"d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7\": rpc error: code = NotFound desc = could not find container \"d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7\": container with ID starting with d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7 not found: ID does not exist" Jan 31 15:34:03 crc kubenswrapper[4763]: I0131 15:34:03.049112 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac81e32a-c558-4275-8b3e-448c797bb0a9" path="/var/lib/kubelet/pods/ac81e32a-c558-4275-8b3e-448c797bb0a9/volumes" Jan 31 15:34:13 crc kubenswrapper[4763]: I0131 15:34:13.042871 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:34:13 crc kubenswrapper[4763]: E0131 15:34:13.044224 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:34:28 crc kubenswrapper[4763]: I0131 15:34:28.041547 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:34:28 crc kubenswrapper[4763]: E0131 15:34:28.042240 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:34:39 crc kubenswrapper[4763]: I0131 15:34:39.041674 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:34:39 crc kubenswrapper[4763]: E0131 15:34:39.042738 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:34:52 crc kubenswrapper[4763]: I0131 15:34:52.041578 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:34:52 crc kubenswrapper[4763]: E0131 15:34:52.043360 4763 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:35:05 crc kubenswrapper[4763]: I0131 15:35:05.041957 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:35:05 crc kubenswrapper[4763]: E0131 15:35:05.042661 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:35:16 crc kubenswrapper[4763]: I0131 15:35:16.042212 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:35:16 crc kubenswrapper[4763]: E0131 15:35:16.043087 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:35:29 crc kubenswrapper[4763]: I0131 15:35:29.043120 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:35:29 crc kubenswrapper[4763]: E0131 15:35:29.044929 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:35:43 crc kubenswrapper[4763]: I0131 15:35:43.323307 4763 scope.go:117] "RemoveContainer" containerID="2bda736f39ac7738f7e5a8baee5357943f15ea2730a9ddbd6eb3bf5d9f6ff0ff" Jan 31 15:35:43 crc kubenswrapper[4763]: I0131 15:35:43.350867 4763 scope.go:117] "RemoveContainer" containerID="6c1accfe18801fff7cebc38d02887e93a257691b6f17a553032f19d001556184" Jan 31 15:35:44 crc kubenswrapper[4763]: I0131 15:35:44.042001 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:35:44 crc kubenswrapper[4763]: E0131 15:35:44.042418 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:35:57 crc kubenswrapper[4763]: I0131 15:35:57.041822 4763 scope.go:117] 
"RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:35:57 crc kubenswrapper[4763]: E0131 15:35:57.043955 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.591186 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-99pvx"] Jan 31 15:36:02 crc kubenswrapper[4763]: E0131 15:36:02.592230 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b48077-151c-45b6-bc68-224b69ea1311" containerName="extract-utilities" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.592255 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b48077-151c-45b6-bc68-224b69ea1311" containerName="extract-utilities" Jan 31 15:36:02 crc kubenswrapper[4763]: E0131 15:36:02.592289 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac81e32a-c558-4275-8b3e-448c797bb0a9" containerName="copy" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.592301 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac81e32a-c558-4275-8b3e-448c797bb0a9" containerName="copy" Jan 31 15:36:02 crc kubenswrapper[4763]: E0131 15:36:02.592317 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b48077-151c-45b6-bc68-224b69ea1311" containerName="registry-server" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.592329 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b48077-151c-45b6-bc68-224b69ea1311" containerName="registry-server" Jan 31 15:36:02 crc kubenswrapper[4763]: E0131 15:36:02.592362 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac81e32a-c558-4275-8b3e-448c797bb0a9" containerName="gather" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.592374 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac81e32a-c558-4275-8b3e-448c797bb0a9" containerName="gather" Jan 31 15:36:02 crc kubenswrapper[4763]: E0131 15:36:02.592385 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b48077-151c-45b6-bc68-224b69ea1311" containerName="extract-content" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.592396 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b48077-151c-45b6-bc68-224b69ea1311" containerName="extract-content" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.592622 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac81e32a-c558-4275-8b3e-448c797bb0a9" containerName="copy" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.592649 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b48077-151c-45b6-bc68-224b69ea1311" containerName="registry-server" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.592678 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac81e32a-c558-4275-8b3e-448c797bb0a9" containerName="gather" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.603896 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-99pvx" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.635160 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-99pvx"] Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.721054 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7phzv\" (UniqueName: \"kubernetes.io/projected/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-kube-api-access-7phzv\") pod \"certified-operators-99pvx\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " pod="openshift-marketplace/certified-operators-99pvx" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.721219 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-catalog-content\") pod \"certified-operators-99pvx\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " pod="openshift-marketplace/certified-operators-99pvx" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.721269 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-utilities\") pod \"certified-operators-99pvx\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " pod="openshift-marketplace/certified-operators-99pvx" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.822304 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-catalog-content\") pod \"certified-operators-99pvx\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " pod="openshift-marketplace/certified-operators-99pvx" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.822385 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-utilities\") pod \"certified-operators-99pvx\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " pod="openshift-marketplace/certified-operators-99pvx" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.822439 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7phzv\" (UniqueName: \"kubernetes.io/projected/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-kube-api-access-7phzv\") pod \"certified-operators-99pvx\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " pod="openshift-marketplace/certified-operators-99pvx" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.822898 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-catalog-content\") pod \"certified-operators-99pvx\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " pod="openshift-marketplace/certified-operators-99pvx" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.823022 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-utilities\") pod \"certified-operators-99pvx\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " pod="openshift-marketplace/certified-operators-99pvx" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.857527 4763 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7phzv\" (UniqueName: \"kubernetes.io/projected/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-kube-api-access-7phzv\") pod \"certified-operators-99pvx\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " pod="openshift-marketplace/certified-operators-99pvx" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.938267 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-99pvx" Jan 31 15:36:03 crc kubenswrapper[4763]: I0131 15:36:03.366898 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-99pvx"] Jan 31 15:36:03 crc kubenswrapper[4763]: I0131 15:36:03.877543 4763 generic.go:334] "Generic (PLEG): container finished" podID="a0dd2603-0018-4872-95f3-a5dd2f85e8c5" containerID="1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d" exitCode=0 Jan 31 15:36:03 crc kubenswrapper[4763]: I0131 15:36:03.877646 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99pvx" event={"ID":"a0dd2603-0018-4872-95f3-a5dd2f85e8c5","Type":"ContainerDied","Data":"1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d"} Jan 31 15:36:03 crc kubenswrapper[4763]: I0131 15:36:03.877718 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99pvx" event={"ID":"a0dd2603-0018-4872-95f3-a5dd2f85e8c5","Type":"ContainerStarted","Data":"9f7563e8815117479b3cf1c11a99a4e602bb7abc60c65aa9020683cd323e6e67"} Jan 31 15:36:03 crc kubenswrapper[4763]: I0131 15:36:03.879741 4763 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:36:04 crc kubenswrapper[4763]: I0131 15:36:04.890026 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99pvx" event={"ID":"a0dd2603-0018-4872-95f3-a5dd2f85e8c5","Type":"ContainerStarted","Data":"9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff"} Jan 31 15:36:05 crc kubenswrapper[4763]: I0131 15:36:05.898578 4763 generic.go:334] "Generic (PLEG): container finished" podID="a0dd2603-0018-4872-95f3-a5dd2f85e8c5" containerID="9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff" exitCode=0 Jan 31 15:36:05 crc kubenswrapper[4763]: I0131 15:36:05.898722 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99pvx" event={"ID":"a0dd2603-0018-4872-95f3-a5dd2f85e8c5","Type":"ContainerDied","Data":"9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff"} Jan 31 15:36:06 crc kubenswrapper[4763]: I0131 15:36:06.908499 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99pvx" event={"ID":"a0dd2603-0018-4872-95f3-a5dd2f85e8c5","Type":"ContainerStarted","Data":"3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f"} Jan 31 15:36:06 crc kubenswrapper[4763]: I0131 15:36:06.937380 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-99pvx" podStartSLOduration=2.526957426 podStartE2EDuration="4.937362935s" podCreationTimestamp="2026-01-31 15:36:02 +0000 UTC" firstStartedPulling="2026-01-31 15:36:03.87948459 +0000 UTC m=+2483.634222883" lastFinishedPulling="2026-01-31 15:36:06.289890089 +0000 UTC m=+2486.044628392" observedRunningTime="2026-01-31 15:36:06.934018306 +0000 UTC m=+2486.688756629" watchObservedRunningTime="2026-01-31 
15:36:06.937362935 +0000 UTC m=+2486.692101228" Jan 31 15:36:11 crc kubenswrapper[4763]: I0131 15:36:11.052778 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:36:11 crc kubenswrapper[4763]: E0131 15:36:11.053599 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:36:12 crc kubenswrapper[4763]: I0131 15:36:12.938765 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-99pvx" Jan 31 15:36:12 crc kubenswrapper[4763]: I0131 15:36:12.938893 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-99pvx" Jan 31 15:36:13 crc kubenswrapper[4763]: I0131 15:36:13.018043 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-99pvx" Jan 31 15:36:13 crc kubenswrapper[4763]: I0131 15:36:13.062515 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-99pvx" Jan 31 15:36:13 crc kubenswrapper[4763]: I0131 15:36:13.254768 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-99pvx"] Jan 31 15:36:14 crc kubenswrapper[4763]: I0131 15:36:14.966940 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-99pvx" podUID="a0dd2603-0018-4872-95f3-a5dd2f85e8c5" containerName="registry-server" containerID="cri-o://3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f" gracePeriod=2 Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.425111 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-99pvx" Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.527733 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-utilities\") pod \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.527810 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-catalog-content\") pod \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.527858 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7phzv\" (UniqueName: \"kubernetes.io/projected/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-kube-api-access-7phzv\") pod \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.529227 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-utilities" (OuterVolumeSpecName: "utilities") pod "a0dd2603-0018-4872-95f3-a5dd2f85e8c5" (UID: "a0dd2603-0018-4872-95f3-a5dd2f85e8c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.536385 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-kube-api-access-7phzv" (OuterVolumeSpecName: "kube-api-access-7phzv") pod "a0dd2603-0018-4872-95f3-a5dd2f85e8c5" (UID: "a0dd2603-0018-4872-95f3-a5dd2f85e8c5"). InnerVolumeSpecName "kube-api-access-7phzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.629666 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.629713 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7phzv\" (UniqueName: \"kubernetes.io/projected/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-kube-api-access-7phzv\") on node \"crc\" DevicePath \"\"" Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.977469 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0dd2603-0018-4872-95f3-a5dd2f85e8c5" (UID: "a0dd2603-0018-4872-95f3-a5dd2f85e8c5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.978632 4763 generic.go:334] "Generic (PLEG): container finished" podID="a0dd2603-0018-4872-95f3-a5dd2f85e8c5" containerID="3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f" exitCode=0 Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.978732 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99pvx" event={"ID":"a0dd2603-0018-4872-95f3-a5dd2f85e8c5","Type":"ContainerDied","Data":"3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f"} Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.978778 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99pvx" event={"ID":"a0dd2603-0018-4872-95f3-a5dd2f85e8c5","Type":"ContainerDied","Data":"9f7563e8815117479b3cf1c11a99a4e602bb7abc60c65aa9020683cd323e6e67"} Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.978807 4763 scope.go:117] "RemoveContainer" containerID="3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f" Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.979058 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-99pvx" Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.009947 4763 scope.go:117] "RemoveContainer" containerID="9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff" Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.021507 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-99pvx"] Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.042770 4763 scope.go:117] "RemoveContainer" containerID="1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d" Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.042985 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.050594 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-99pvx"] Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.066429 4763 scope.go:117] "RemoveContainer" containerID="3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f" Jan 31 15:36:16 crc kubenswrapper[4763]: E0131 15:36:16.067040 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f\": container with ID starting with 3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f not found: ID does not exist" containerID="3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f" Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.067083 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f"} err="failed to get container status \"3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f\": rpc error: code = NotFound desc = could not find container \"3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f\": container with ID starting with 3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f not found: ID does not exist" Jan 31 
15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.067107 4763 scope.go:117] "RemoveContainer" containerID="9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff" Jan 31 15:36:16 crc kubenswrapper[4763]: E0131 15:36:16.067381 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff\": container with ID starting with 9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff not found: ID does not exist" containerID="9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff" Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.067411 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff"} err="failed to get container status \"9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff\": rpc error: code = NotFound desc = could not find container \"9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff\": container with ID starting with 9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff not found: ID does not exist" Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.067435 4763 scope.go:117] "RemoveContainer" containerID="1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d" Jan 31 15:36:16 crc kubenswrapper[4763]: E0131 15:36:16.067651 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d\": container with ID starting with 1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d not found: ID does not exist" containerID="1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d" Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.067684 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d"} err="failed to get container status \"1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d\": rpc error: code = NotFound desc = could not find container \"1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d\": container with ID starting with 1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d not found: ID does not exist" Jan 31 15:36:17 crc kubenswrapper[4763]: I0131 15:36:17.048330 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0dd2603-0018-4872-95f3-a5dd2f85e8c5" path="/var/lib/kubelet/pods/a0dd2603-0018-4872-95f3-a5dd2f85e8c5/volumes" Jan 31 15:36:26 crc kubenswrapper[4763]: I0131 15:36:26.042735 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:36:26 crc kubenswrapper[4763]: E0131 15:36:26.044077 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:36:37 crc kubenswrapper[4763]: I0131 15:36:37.042101 4763 scope.go:117] "RemoveContainer" 
containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:36:37 crc kubenswrapper[4763]: E0131 15:36:37.042837 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:36:47 crc kubenswrapper[4763]: I0131 15:36:47.596113 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wm82s"] Jan 31 15:36:47 crc kubenswrapper[4763]: E0131 15:36:47.597297 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0dd2603-0018-4872-95f3-a5dd2f85e8c5" containerName="registry-server" Jan 31 15:36:47 crc kubenswrapper[4763]: I0131 15:36:47.597320 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0dd2603-0018-4872-95f3-a5dd2f85e8c5" containerName="registry-server" Jan 31 15:36:47 crc kubenswrapper[4763]: E0131 15:36:47.597349 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0dd2603-0018-4872-95f3-a5dd2f85e8c5" containerName="extract-utilities" Jan 31 15:36:47 crc kubenswrapper[4763]: I0131 15:36:47.597360 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0dd2603-0018-4872-95f3-a5dd2f85e8c5" containerName="extract-utilities" Jan 31 15:36:47 crc kubenswrapper[4763]: E0131 15:36:47.597376 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0dd2603-0018-4872-95f3-a5dd2f85e8c5" containerName="extract-content" Jan 31 15:36:47 crc kubenswrapper[4763]: I0131 15:36:47.597386 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0dd2603-0018-4872-95f3-a5dd2f85e8c5" containerName="extract-content" Jan 31 15:36:47 crc kubenswrapper[4763]: I0131 15:36:47.597599 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0dd2603-0018-4872-95f3-a5dd2f85e8c5" containerName="registry-server" Jan 31 15:36:47 crc kubenswrapper[4763]: I0131 15:36:47.599345 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wm82s" Jan 31 15:36:47 crc kubenswrapper[4763]: I0131 15:36:47.621364 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wm82s"] Jan 31 15:36:47 crc kubenswrapper[4763]: I0131 15:36:47.660392 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a483dee8-9edb-4b09-ac71-5928ea407983-catalog-content\") pod \"community-operators-wm82s\" (UID: \"a483dee8-9edb-4b09-ac71-5928ea407983\") " pod="openshift-marketplace/community-operators-wm82s" Jan 31 15:36:47 crc kubenswrapper[4763]: I0131 15:36:47.660662 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cttck\" (UniqueName: \"kubernetes.io/projected/a483dee8-9edb-4b09-ac71-5928ea407983-kube-api-access-cttck\") pod \"community-operators-wm82s\" (UID: \"a483dee8-9edb-4b09-ac71-5928ea407983\") " pod="openshift-marketplace/community-operators-wm82s" Jan 31 15:36:47 crc kubenswrapper[4763]: I0131 15:36:47.660948 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a483dee8-9edb-4b09-ac71-5928ea407983-utilities\") pod \"community-operators-wm82s\" (UID: \"a483dee8-9edb-4b09-ac71-5928ea407983\") " pod="openshift-marketplace/community-operators-wm82s" Jan 31 15:36:47 crc kubenswrapper[4763]: I0131 15:36:47.762171 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a483dee8-9edb-4b09-ac71-5928ea407983-utilities\") pod \"community-operators-wm82s\" (UID: \"a483dee8-9edb-4b09-ac71-5928ea407983\") " pod="openshift-marketplace/community-operators-wm82s" Jan 31 15:36:47 crc kubenswrapper[4763]: I0131 15:36:47.762246 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a483dee8-9edb-4b09-ac71-5928ea407983-catalog-content\") pod \"community-operators-wm82s\" (UID: \"a483dee8-9edb-4b09-ac71-5928ea407983\") " pod="openshift-marketplace/community-operators-wm82s" Jan 31 15:36:47 crc kubenswrapper[4763]: I0131 15:36:47.762376 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cttck\" (UniqueName: \"kubernetes.io/projected/a483dee8-9edb-4b09-ac71-5928ea407983-kube-api-access-cttck\") pod \"community-operators-wm82s\" (UID: \"a483dee8-9edb-4b09-ac71-5928ea407983\") " pod="openshift-marketplace/community-operators-wm82s" Jan 31 15:36:47 crc kubenswrapper[4763]: I0131 15:36:47.762901 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a483dee8-9edb-4b09-ac71-5928ea407983-utilities\") pod \"community-operators-wm82s\" (UID: \"a483dee8-9edb-4b09-ac71-5928ea407983\") " pod="openshift-marketplace/community-operators-wm82s" Jan 31 15:36:47 crc kubenswrapper[4763]: I0131 15:36:47.762921 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a483dee8-9edb-4b09-ac71-5928ea407983-catalog-content\") pod \"community-operators-wm82s\" (UID: \"a483dee8-9edb-4b09-ac71-5928ea407983\") " pod="openshift-marketplace/community-operators-wm82s" Jan 31 15:36:47 crc kubenswrapper[4763]: I0131 15:36:47.798253 4763 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cttck\" (UniqueName: \"kubernetes.io/projected/a483dee8-9edb-4b09-ac71-5928ea407983-kube-api-access-cttck\") pod \"community-operators-wm82s\" (UID: \"a483dee8-9edb-4b09-ac71-5928ea407983\") " pod="openshift-marketplace/community-operators-wm82s" Jan 31 15:36:47 crc kubenswrapper[4763]: I0131 15:36:47.977913 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wm82s" Jan 31 15:36:48 crc kubenswrapper[4763]: I0131 15:36:48.261185 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wm82s"] Jan 31 15:36:49 crc kubenswrapper[4763]: I0131 15:36:49.269140 4763 generic.go:334] "Generic (PLEG): container finished" podID="a483dee8-9edb-4b09-ac71-5928ea407983" containerID="6579037a752d918b854e9408a99fb0aec45d9ddfc6b9c15fc177bb37078323da" exitCode=0 Jan 31 15:36:49 crc kubenswrapper[4763]: I0131 15:36:49.269180 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wm82s" event={"ID":"a483dee8-9edb-4b09-ac71-5928ea407983","Type":"ContainerDied","Data":"6579037a752d918b854e9408a99fb0aec45d9ddfc6b9c15fc177bb37078323da"} Jan 31 15:36:49 crc kubenswrapper[4763]: I0131 15:36:49.269206 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wm82s" event={"ID":"a483dee8-9edb-4b09-ac71-5928ea407983","Type":"ContainerStarted","Data":"d982eaf4dc2f52424d8e73a210fca52082d9fdbb86957f421a04dc07622e9716"} Jan 31 15:36:50 crc kubenswrapper[4763]: I0131 15:36:50.279482 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wm82s" event={"ID":"a483dee8-9edb-4b09-ac71-5928ea407983","Type":"ContainerStarted","Data":"e1e01423a31ee001ca1b586143de2b67256a3d68a18437b7dc3aee1015c8d01a"} Jan 31 15:36:51 crc kubenswrapper[4763]: I0131 15:36:51.290186 4763 generic.go:334] "Generic (PLEG): container finished" podID="a483dee8-9edb-4b09-ac71-5928ea407983" containerID="e1e01423a31ee001ca1b586143de2b67256a3d68a18437b7dc3aee1015c8d01a" exitCode=0 Jan 31 15:36:51 crc kubenswrapper[4763]: I0131 15:36:51.291227 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wm82s" event={"ID":"a483dee8-9edb-4b09-ac71-5928ea407983","Type":"ContainerDied","Data":"e1e01423a31ee001ca1b586143de2b67256a3d68a18437b7dc3aee1015c8d01a"} Jan 31 15:36:52 crc kubenswrapper[4763]: I0131 15:36:52.041897 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:36:52 crc kubenswrapper[4763]: I0131 15:36:52.298922 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wm82s" event={"ID":"a483dee8-9edb-4b09-ac71-5928ea407983","Type":"ContainerStarted","Data":"5710e59e0166b707c096a7280b44b77a18a25f1fd97234bcd2d262a2aeebe5d3"} Jan 31 15:36:52 crc kubenswrapper[4763]: I0131 15:36:52.301807 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"5e334e693bcb3afa29700338be95ceb997b6b7fdeaf42fb8a44420ab311bacf3"} Jan 31 15:36:52 crc kubenswrapper[4763]: I0131 15:36:52.320163 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wm82s" podStartSLOduration=2.912226822 
podStartE2EDuration="5.320147481s" podCreationTimestamp="2026-01-31 15:36:47 +0000 UTC" firstStartedPulling="2026-01-31 15:36:49.270748357 +0000 UTC m=+2529.025486650" lastFinishedPulling="2026-01-31 15:36:51.678669016 +0000 UTC m=+2531.433407309" observedRunningTime="2026-01-31 15:36:52.316951995 +0000 UTC m=+2532.071690288" watchObservedRunningTime="2026-01-31 15:36:52.320147481 +0000 UTC m=+2532.074885774" Jan 31 15:36:57 crc kubenswrapper[4763]: I0131 15:36:57.979058 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wm82s" Jan 31 15:36:57 crc kubenswrapper[4763]: I0131 15:36:57.979568 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wm82s" Jan 31 15:36:58 crc kubenswrapper[4763]: I0131 15:36:58.033065 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wm82s" Jan 31 15:36:58 crc kubenswrapper[4763]: I0131 15:36:58.387789 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wm82s" Jan 31 15:36:58 crc kubenswrapper[4763]: I0131 15:36:58.433904 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wm82s"] Jan 31 15:37:00 crc kubenswrapper[4763]: I0131 15:37:00.365726 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wm82s" podUID="a483dee8-9edb-4b09-ac71-5928ea407983" containerName="registry-server" containerID="cri-o://5710e59e0166b707c096a7280b44b77a18a25f1fd97234bcd2d262a2aeebe5d3" gracePeriod=2 var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515137420646024455 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015137420647017373 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015137413354016513 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015137413354015463 5ustar corecore